00:00:00.000 Started by upstream project "autotest-per-patch" build number 132504
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.050 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.969 The recommended git tool is: git
00:00:00.969 using credential 00000000-0000-0000-0000-000000000002
00:00:00.971 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.982 Fetching changes from the remote Git repository
00:00:00.985 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.996 Using shallow fetch with depth 1
00:00:00.996 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.996 > git --version # timeout=10
00:00:01.007 > git --version # 'git version 2.39.2'
00:00:01.007 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:01.018 Setting http proxy: proxy-dmz.intel.com:911
00:00:01.018 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.298 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.310 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.321 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.321 > git config core.sparsecheckout # timeout=10
00:00:06.332 > git read-tree -mu HEAD # timeout=10
00:00:06.347 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.366 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.366 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.441 [Pipeline] Start of Pipeline
00:00:06.455 [Pipeline] library
00:00:06.457 Loading library shm_lib@master
00:00:06.457 Library shm_lib@master is cached. Copying from home.
00:00:06.477 [Pipeline] node
00:00:06.597 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.599 [Pipeline] {
00:00:06.608 [Pipeline] catchError
00:00:06.609 [Pipeline] {
00:00:06.620 [Pipeline] wrap
00:00:06.626 [Pipeline] {
00:00:06.634 [Pipeline] stage
00:00:06.637 [Pipeline] { (Prologue)
00:00:06.823 [Pipeline] sh
00:00:07.112 + logger -p user.info -t JENKINS-CI
00:00:07.134 [Pipeline] echo
00:00:07.136 Node: GP6
00:00:07.145 [Pipeline] sh
00:00:07.454 [Pipeline] setCustomBuildProperty
00:00:07.469 [Pipeline] echo
00:00:07.471 Cleanup processes
00:00:07.477 [Pipeline] sh
00:00:07.765 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.765 2972512 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.778 [Pipeline] sh
00:00:08.063 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.063 ++ grep -v 'sudo pgrep'
00:00:08.063 ++ awk '{print $1}'
00:00:08.063 + sudo kill -9
00:00:08.063 + true
00:00:08.077 [Pipeline] cleanWs
00:00:08.087 [WS-CLEANUP] Deleting project workspace...
00:00:08.087 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.094 [WS-CLEANUP] done
00:00:08.099 [Pipeline] setCustomBuildProperty
00:00:08.113 [Pipeline] sh
00:00:08.402 + sudo git config --global --replace-all safe.directory '*'
00:00:08.509 [Pipeline] httpRequest
00:00:09.452 [Pipeline] echo
00:00:09.453 Sorcerer 10.211.164.20 is alive
00:00:09.460 [Pipeline] retry
00:00:09.462 [Pipeline] {
00:00:09.470 [Pipeline] httpRequest
00:00:09.474 HttpMethod: GET
00:00:09.475 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.475 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.481 Response Code: HTTP/1.1 200 OK
00:00:09.481 Success: Status code 200 is in the accepted range: 200,404
00:00:09.482 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.435 [Pipeline] }
00:00:24.452 [Pipeline] // retry
00:00:24.523 [Pipeline] sh
00:00:24.808 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.825 [Pipeline] httpRequest
00:00:25.465 [Pipeline] echo
00:00:25.466 Sorcerer 10.211.164.20 is alive
00:00:25.476 [Pipeline] retry
00:00:25.478 [Pipeline] {
00:00:25.491 [Pipeline] httpRequest
00:00:25.495 HttpMethod: GET
00:00:25.496 URL: http://10.211.164.20/packages/spdk_9b39915713e18826af1c14c6c4638cf0b83fa357.tar.gz
00:00:25.496 Sending request to url: http://10.211.164.20/packages/spdk_9b39915713e18826af1c14c6c4638cf0b83fa357.tar.gz
00:00:25.533 Response Code: HTTP/1.1 200 OK
00:00:25.534 Success: Status code 200 is in the accepted range: 200,404
00:00:25.534 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9b39915713e18826af1c14c6c4638cf0b83fa357.tar.gz
00:04:28.209 [Pipeline] }
00:04:28.227 [Pipeline] // retry
00:04:28.235 [Pipeline] sh
00:04:28.521 + tar --no-same-owner -xf spdk_9b39915713e18826af1c14c6c4638cf0b83fa357.tar.gz
00:04:31.816 [Pipeline] sh
00:04:32.100 + git -C spdk log --oneline -n5
00:04:32.100 9b3991571 nvme: add poll_group interrupt callback
00:04:32.100 f1dd81af3 nvme: add spdk_nvme_poll_group_get_fd_group()
00:04:32.100 4da34a829 thread: fd_group-based interrupts
00:04:32.100 10ec63d4e thread: move interrupt allocation to a function
00:04:32.100 393e80fcd util: add method for setting fd_group's wrapper
00:04:32.111 [Pipeline] }
00:04:32.125 [Pipeline] // stage
00:04:32.134 [Pipeline] stage
00:04:32.136 [Pipeline] { (Prepare)
00:04:32.151 [Pipeline] writeFile
00:04:32.168 [Pipeline] sh
00:04:32.454 + logger -p user.info -t JENKINS-CI
00:04:32.468 [Pipeline] sh
00:04:32.754 + logger -p user.info -t JENKINS-CI
00:04:32.767 [Pipeline] sh
00:04:33.053 + cat autorun-spdk.conf
00:04:33.053 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:33.053 SPDK_TEST_NVMF=1
00:04:33.053 SPDK_TEST_NVME_CLI=1
00:04:33.053 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:33.053 SPDK_TEST_NVMF_NICS=e810
00:04:33.053 SPDK_TEST_VFIOUSER=1
00:04:33.053 SPDK_RUN_UBSAN=1
00:04:33.053 NET_TYPE=phy
00:04:33.061 RUN_NIGHTLY=0
00:04:33.065 [Pipeline] readFile
00:04:33.099 [Pipeline] withEnv
00:04:33.101 [Pipeline] {
00:04:33.117 [Pipeline] sh
00:04:33.404 + set -ex
00:04:33.404 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:04:33.404 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:33.404 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:33.404 ++ SPDK_TEST_NVMF=1
00:04:33.404 ++ SPDK_TEST_NVME_CLI=1
00:04:33.404 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:33.404 ++ SPDK_TEST_NVMF_NICS=e810
00:04:33.404 ++ SPDK_TEST_VFIOUSER=1
00:04:33.404 ++ SPDK_RUN_UBSAN=1
00:04:33.404 ++ NET_TYPE=phy
00:04:33.404 ++ RUN_NIGHTLY=0
00:04:33.404 + case $SPDK_TEST_NVMF_NICS in
00:04:33.404 + DRIVERS=ice
00:04:33.404 + [[ tcp == \r\d\m\a ]]
00:04:33.404 + [[ -n ice ]]
00:04:33.404 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:04:37.607 rmmod: ERROR: Module irdma is not currently loaded
00:04:37.607 rmmod: ERROR: Module i40iw is not currently loaded
00:04:37.607 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:04:37.607 + true
00:04:37.607 + for D in $DRIVERS
00:04:37.607 + sudo modprobe ice
00:04:37.607 + exit 0
00:04:37.617 [Pipeline] }
00:04:37.632 [Pipeline] // withEnv
00:04:37.637 [Pipeline] }
00:04:37.651 [Pipeline] // stage
00:04:37.664 [Pipeline] catchError
00:04:37.668 [Pipeline] {
00:04:37.686 [Pipeline] timeout
00:04:37.686 Timeout set to expire in 1 hr 0 min
00:04:37.688 [Pipeline] {
00:04:37.704 [Pipeline] stage
00:04:37.706 [Pipeline] { (Tests)
00:04:37.722 [Pipeline] sh
00:04:38.009 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:38.009 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:38.009 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:38.009 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:04:38.009 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:38.009 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:38.009 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:04:38.009 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:38.009 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:38.009 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:38.009 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:04:38.009 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:38.009 + source /etc/os-release
00:04:38.009 ++ NAME='Fedora Linux'
00:04:38.009 ++ VERSION='39 (Cloud Edition)'
00:04:38.009 ++ ID=fedora
00:04:38.009 ++ VERSION_ID=39
00:04:38.009 ++ VERSION_CODENAME=
00:04:38.009 ++ PLATFORM_ID=platform:f39
00:04:38.009 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:38.009 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:38.009 ++ LOGO=fedora-logo-icon
00:04:38.009 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:38.009 ++ HOME_URL=https://fedoraproject.org/
00:04:38.009 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:38.009 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:38.009 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:38.009 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:38.009 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:38.009 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:38.009 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:38.009 ++ SUPPORT_END=2024-11-12
00:04:38.009 ++ VARIANT='Cloud Edition'
00:04:38.009 ++ VARIANT_ID=cloud
00:04:38.009 + uname -a
00:04:38.009 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:04:38.009 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:38.946 Hugepages
00:04:38.947 node hugesize free / total
00:04:38.947 node0 1048576kB 0 / 0
00:04:38.947 node0 2048kB 0 / 0
00:04:38.947 node1 1048576kB 0 / 0
00:04:38.947 node1 2048kB 0 / 0
00:04:38.947
00:04:38.947 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:38.947 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:04:38.947 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:04:38.947 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:04:38.947 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:04:38.947 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:04:38.947 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:04:38.947 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:04:38.947 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:04:39.207 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:04:39.207 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:04:39.207 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:04:39.207 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:04:39.207 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:04:39.207 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:04:39.207 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:04:39.207 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:04:39.207 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:04:39.207 + rm -f /tmp/spdk-ld-path
00:04:39.207 + source autorun-spdk.conf
00:04:39.207 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:39.207 ++ SPDK_TEST_NVMF=1
00:04:39.207 ++ SPDK_TEST_NVME_CLI=1
00:04:39.207 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:39.207 ++ SPDK_TEST_NVMF_NICS=e810
00:04:39.207 ++ SPDK_TEST_VFIOUSER=1
00:04:39.207 ++ SPDK_RUN_UBSAN=1
00:04:39.207 ++ NET_TYPE=phy
00:04:39.207 ++ RUN_NIGHTLY=0
00:04:39.207 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:39.207 + [[ -n '' ]]
00:04:39.207 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:39.207 + for M in /var/spdk/build-*-manifest.txt
00:04:39.207 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:39.207 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:39.207 + for M in /var/spdk/build-*-manifest.txt
00:04:39.207 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:39.207 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:39.207 + for M in /var/spdk/build-*-manifest.txt
00:04:39.207 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:39.207 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:39.207 ++ uname
00:04:39.207 + [[ Linux == \L\i\n\u\x ]]
00:04:39.207 + sudo dmesg -T
00:04:39.207 + sudo dmesg --clear
00:04:39.207 + dmesg_pid=2974471
00:04:39.207 + sudo dmesg -Tw
00:04:39.207 + [[ Fedora Linux == FreeBSD ]]
00:04:39.207 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:39.207 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:39.207 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:39.207 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:04:39.207 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:04:39.207 + [[ -x /usr/src/fio-static/fio ]]
00:04:39.207 + export FIO_BIN=/usr/src/fio-static/fio
00:04:39.207 + FIO_BIN=/usr/src/fio-static/fio
00:04:39.207 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:39.207 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:39.207 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:39.207 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:39.207 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:39.207 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:39.207 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:39.207 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:39.207 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:39.207 13:03:36 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:39.207 13:03:36 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:04:39.207 13:03:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:04:39.207 13:03:36 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:04:39.207 13:03:36 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:39.207 13:03:36 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:04:39.207 13:03:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:04:39.207 13:03:36 -- scripts/common.sh@15 -- $ shopt -s extglob
00:04:39.207 13:03:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:04:39.207 13:03:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:04:39.207 13:03:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:04:39.207 13:03:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:39.207 13:03:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:39.207 13:03:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:39.207 13:03:36 -- paths/export.sh@5 -- $ export PATH
00:04:39.208 13:03:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:04:39.208 13:03:36 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:04:39.208 13:03:36 -- common/autobuild_common.sh@493 -- $ date +%s
00:04:39.208 13:03:36 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732536216.XXXXXX
00:04:39.208 13:03:36 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732536216.18CKw3
00:04:39.208 13:03:36 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:04:39.208 13:03:36 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:04:39.208 13:03:36 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:04:39.208 13:03:36 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:04:39.208 13:03:36 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:04:39.468 13:03:36 -- common/autobuild_common.sh@509 -- $ get_config_params
00:04:39.468 13:03:36 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:04:39.468 13:03:36 -- common/autotest_common.sh@10 -- $ set +x
00:04:39.468 13:03:36 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:04:39.468 13:03:36 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:39.468 13:03:36 -- pm/common@17 -- $ local monitor
00:04:39.468 13:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:39.468 13:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:39.468 13:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:39.468 13:03:36 -- pm/common@21 -- $ date +%s
00:04:39.468 13:03:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:39.468 13:03:36 -- pm/common@21 -- $ date +%s
00:04:39.468 13:03:36 -- pm/common@25 -- $ sleep 1
00:04:39.468 13:03:36 -- pm/common@21 -- $ date +%s
00:04:39.468 13:03:36 -- pm/common@21 -- $ date +%s
00:04:39.468 13:03:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732536216
00:04:39.468 13:03:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732536216
00:04:39.468 13:03:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732536216
00:04:39.468 13:03:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732536216
00:04:39.468 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732536216_collect-vmstat.pm.log
00:04:39.468 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732536216_collect-cpu-load.pm.log
00:04:39.468 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732536216_collect-cpu-temp.pm.log
00:04:39.468 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732536216_collect-bmc-pm.bmc.pm.log
00:04:40.408 13:03:37 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:40.408 13:03:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:40.408 13:03:37 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:40.408 13:03:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:40.408 13:03:37 -- spdk/autobuild.sh@16 -- $ date -u
00:04:40.408 Mon Nov 25 12:03:37 PM UTC 2024
00:04:40.408 13:03:37 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:40.408 v25.01-pre-226-g9b3991571
00:04:40.408 13:03:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:04:40.408 13:03:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:40.408 13:03:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:40.408 13:03:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:40.408 13:03:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:40.408 13:03:37 -- common/autotest_common.sh@10 -- $ set +x
00:04:40.408 ************************************
00:04:40.408 START TEST ubsan
00:04:40.408 ************************************
00:04:40.408 13:03:37 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:40.408 using ubsan
00:04:40.408
00:04:40.408 real 0m0.000s
00:04:40.408 user 0m0.000s
00:04:40.408 sys 0m0.000s
00:04:40.408 13:03:37 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:40.408 13:03:37 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:40.408 ************************************
00:04:40.408 END TEST ubsan
************************************
00:04:40.408 13:03:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:40.408 13:03:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:40.408 13:03:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:40.408 13:03:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:40.408 13:03:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:40.408 13:03:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:40.408 13:03:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:40.408 13:03:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:40.408 13:03:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:04:40.409 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:04:40.409 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:40.669 Using 'verbs' RDMA provider
00:04:51.589 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:01.619 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:01.619 Creating mk/config.mk...done.
00:05:01.619 Creating mk/cc.flags.mk...done.
00:05:01.619 Type 'make' to build.
00:05:01.619 13:03:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:05:01.619 13:03:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:01.619 13:03:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:01.619 13:03:59 -- common/autotest_common.sh@10 -- $ set +x
00:05:01.619 ************************************
00:05:01.619 START TEST make
00:05:01.619 ************************************
00:05:01.619 13:03:59 make -- common/autotest_common.sh@1129 -- $ make -j48
00:05:01.876 make[1]: Nothing to be done for 'all'.
00:05:03.793 The Meson build system
00:05:03.793 Version: 1.5.0
00:05:03.793 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:05:03.793 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:03.793 Build type: native build
00:05:03.793 Project name: libvfio-user
00:05:03.793 Project version: 0.0.1
00:05:03.793 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:03.793 C linker for the host machine: cc ld.bfd 2.40-14
00:05:03.793 Host machine cpu family: x86_64
00:05:03.793 Host machine cpu: x86_64
00:05:03.793 Run-time dependency threads found: YES
00:05:03.793 Library dl found: YES
00:05:03.793 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:03.793 Run-time dependency json-c found: YES 0.17
00:05:03.793 Run-time dependency cmocka found: YES 1.1.7
00:05:03.793 Program pytest-3 found: NO
00:05:03.793 Program flake8 found: NO
00:05:03.793 Program misspell-fixer found: NO
00:05:03.793 Program restructuredtext-lint found: NO
00:05:03.793 Program valgrind found: YES (/usr/bin/valgrind)
00:05:03.793 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:03.793 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:03.793 Compiler for C supports arguments -Wwrite-strings: YES
00:05:03.793 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:03.793 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:03.793 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:03.793 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:03.793 Build targets in project: 8
00:05:03.793 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:03.793 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:03.793
00:05:03.793 libvfio-user 0.0.1
00:05:03.793
00:05:03.793 User defined options
00:05:03.793 buildtype : debug
00:05:03.793 default_library: shared
00:05:03.793 libdir : /usr/local/lib
00:05:03.793
00:05:03.793 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:04.736 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:04.736 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:04.736 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:04.736 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:04.736 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:04.736 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:04.736 [6/37] Compiling C object samples/null.p/null.c.o
00:05:05.000 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:05.000 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:05.000 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:05.000 [10/37] Compiling C object samples/server.p/server.c.o
00:05:05.000 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:05.000 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:05.000 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:05.000 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:05.000 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:05.000 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:05.000 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:05.000 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:05.000 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:05.000 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:05.000 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:05.000 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:05.000 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:05.000 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:05.000 [25/37] Compiling C object samples/client.p/client.c.o
00:05:05.000 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:05.000 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:05.000 [28/37] Linking target samples/client
00:05:05.000 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:05:05.265 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:05.265 [31/37] Linking target test/unit_tests
00:05:05.265 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:05.265 [33/37] Linking target samples/gpio-pci-idio-16
00:05:05.265 [34/37] Linking target samples/server
00:05:05.265 [35/37] Linking target samples/shadow_ioeventfd_server
00:05:05.265 [36/37] Linking target samples/null
00:05:05.265 [37/37] Linking target samples/lspci
00:05:05.265 INFO: autodetecting backend as ninja
00:05:05.265 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:05.527 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:06.473 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:06.473 ninja: no work to do.
00:05:11.743 The Meson build system
00:05:11.743 Version: 1.5.0
00:05:11.743 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:05:11.743 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:05:11.743 Build type: native build
00:05:11.743 Program cat found: YES (/usr/bin/cat)
00:05:11.743 Project name: DPDK
00:05:11.743 Project version: 24.03.0
00:05:11.743 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:11.743 C linker for the host machine: cc ld.bfd 2.40-14
00:05:11.743 Host machine cpu family: x86_64
00:05:11.743 Host machine cpu: x86_64
00:05:11.743 Message: ## Building in Developer Mode ##
00:05:11.743 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:11.743 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:11.743 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:11.743 Program python3 found: YES (/usr/bin/python3)
00:05:11.743 Program cat found: YES (/usr/bin/cat)
00:05:11.743 Compiler for C supports arguments -march=native: YES
00:05:11.743 Checking for size of "void *" : 8
00:05:11.743 Checking for size of "void *" : 8 (cached)
00:05:11.743 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:11.743 Library m found: YES
00:05:11.743 Library numa found: YES
00:05:11.743 Has header "numaif.h" : YES
00:05:11.743 Library fdt found: NO
00:05:11.743 Library execinfo found: NO
00:05:11.743 Has header "execinfo.h" : YES
00:05:11.743 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:11.743 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:11.743 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:11.743 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:11.743 Run-time dependency openssl found: YES 3.1.1
00:05:11.743 Run-time dependency libpcap found: YES 1.10.4
00:05:11.743 Has header "pcap.h" with dependency libpcap: YES
00:05:11.743 Compiler for C supports arguments -Wcast-qual: YES
00:05:11.743 Compiler for C supports arguments -Wdeprecated: YES
00:05:11.743 Compiler for C supports arguments -Wformat: YES
00:05:11.743 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:11.743 Compiler for C supports arguments -Wformat-security: NO
00:05:11.743 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:11.743 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:11.743 Compiler for C supports arguments -Wnested-externs: YES
00:05:11.743 Compiler for C supports arguments -Wold-style-definition: YES
00:05:11.743 Compiler for C supports arguments -Wpointer-arith: YES
00:05:11.743 Compiler for C supports arguments -Wsign-compare: YES
00:05:11.743 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:11.743 Compiler for C supports arguments -Wundef: YES
00:05:11.743 Compiler for C supports arguments -Wwrite-strings: YES
00:05:11.743 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:11.743 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:11.743 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:11.743 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:11.743 Program objdump found: YES (/usr/bin/objdump)
00:05:11.743 Compiler for C supports arguments -mavx512f: YES
00:05:11.743 Checking if "AVX512 checking" compiles: YES
00:05:11.743
Fetching value of define "__SSE4_2__" : 1 00:05:11.743 Fetching value of define "__AES__" : 1 00:05:11.743 Fetching value of define "__AVX__" : 1 00:05:11.743 Fetching value of define "__AVX2__" : (undefined) 00:05:11.743 Fetching value of define "__AVX512BW__" : (undefined) 00:05:11.743 Fetching value of define "__AVX512CD__" : (undefined) 00:05:11.743 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:11.743 Fetching value of define "__AVX512F__" : (undefined) 00:05:11.743 Fetching value of define "__AVX512VL__" : (undefined) 00:05:11.743 Fetching value of define "__PCLMUL__" : 1 00:05:11.743 Fetching value of define "__RDRND__" : 1 00:05:11.743 Fetching value of define "__RDSEED__" : (undefined) 00:05:11.743 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:11.743 Fetching value of define "__znver1__" : (undefined) 00:05:11.743 Fetching value of define "__znver2__" : (undefined) 00:05:11.743 Fetching value of define "__znver3__" : (undefined) 00:05:11.743 Fetching value of define "__znver4__" : (undefined) 00:05:11.743 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:11.743 Message: lib/log: Defining dependency "log" 00:05:11.743 Message: lib/kvargs: Defining dependency "kvargs" 00:05:11.743 Message: lib/telemetry: Defining dependency "telemetry" 00:05:11.743 Checking for function "getentropy" : NO 00:05:11.743 Message: lib/eal: Defining dependency "eal" 00:05:11.743 Message: lib/ring: Defining dependency "ring" 00:05:11.743 Message: lib/rcu: Defining dependency "rcu" 00:05:11.743 Message: lib/mempool: Defining dependency "mempool" 00:05:11.743 Message: lib/mbuf: Defining dependency "mbuf" 00:05:11.743 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:11.743 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:11.743 Compiler for C supports arguments -mpclmul: YES 00:05:11.743 Compiler for C supports arguments -maes: YES 00:05:11.743 Compiler for C supports arguments -mavx512f: YES (cached) 
00:05:11.743 Compiler for C supports arguments -mavx512bw: YES 00:05:11.743 Compiler for C supports arguments -mavx512dq: YES 00:05:11.743 Compiler for C supports arguments -mavx512vl: YES 00:05:11.743 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:11.743 Compiler for C supports arguments -mavx2: YES 00:05:11.743 Compiler for C supports arguments -mavx: YES 00:05:11.743 Message: lib/net: Defining dependency "net" 00:05:11.743 Message: lib/meter: Defining dependency "meter" 00:05:11.743 Message: lib/ethdev: Defining dependency "ethdev" 00:05:11.743 Message: lib/pci: Defining dependency "pci" 00:05:11.743 Message: lib/cmdline: Defining dependency "cmdline" 00:05:11.743 Message: lib/hash: Defining dependency "hash" 00:05:11.743 Message: lib/timer: Defining dependency "timer" 00:05:11.743 Message: lib/compressdev: Defining dependency "compressdev" 00:05:11.743 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:11.743 Message: lib/dmadev: Defining dependency "dmadev" 00:05:11.743 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:11.743 Message: lib/power: Defining dependency "power" 00:05:11.743 Message: lib/reorder: Defining dependency "reorder" 00:05:11.743 Message: lib/security: Defining dependency "security" 00:05:11.743 Has header "linux/userfaultfd.h" : YES 00:05:11.743 Has header "linux/vduse.h" : YES 00:05:11.743 Message: lib/vhost: Defining dependency "vhost" 00:05:11.743 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:11.743 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:11.743 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:11.743 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:11.743 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:11.743 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:11.743 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:11.743 Message: 
Disabling event/* drivers: missing internal dependency "eventdev" 00:05:11.743 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:11.743 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:11.743 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:11.743 Configuring doxy-api-html.conf using configuration 00:05:11.743 Configuring doxy-api-man.conf using configuration 00:05:11.743 Program mandb found: YES (/usr/bin/mandb) 00:05:11.743 Program sphinx-build found: NO 00:05:11.743 Configuring rte_build_config.h using configuration 00:05:11.743 Message: 00:05:11.743 ================= 00:05:11.743 Applications Enabled 00:05:11.743 ================= 00:05:11.743 00:05:11.743 apps: 00:05:11.743 00:05:11.743 00:05:11.743 Message: 00:05:11.743 ================= 00:05:11.743 Libraries Enabled 00:05:11.743 ================= 00:05:11.743 00:05:11.743 libs: 00:05:11.743 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:11.743 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:11.743 cryptodev, dmadev, power, reorder, security, vhost, 00:05:11.743 00:05:11.743 Message: 00:05:11.743 =============== 00:05:11.743 Drivers Enabled 00:05:11.743 =============== 00:05:11.743 00:05:11.743 common: 00:05:11.743 00:05:11.743 bus: 00:05:11.743 pci, vdev, 00:05:11.743 mempool: 00:05:11.743 ring, 00:05:11.743 dma: 00:05:11.743 00:05:11.743 net: 00:05:11.743 00:05:11.743 crypto: 00:05:11.743 00:05:11.743 compress: 00:05:11.743 00:05:11.743 vdpa: 00:05:11.743 00:05:11.743 00:05:11.743 Message: 00:05:11.743 ================= 00:05:11.743 Content Skipped 00:05:11.743 ================= 00:05:11.743 00:05:11.743 apps: 00:05:11.743 dumpcap: explicitly disabled via build config 00:05:11.743 graph: explicitly disabled via build config 00:05:11.743 pdump: explicitly disabled via build config 00:05:11.743 proc-info: explicitly disabled via build config 00:05:11.743 test-acl: explicitly disabled via build config 
00:05:11.743 test-bbdev: explicitly disabled via build config 00:05:11.743 test-cmdline: explicitly disabled via build config 00:05:11.743 test-compress-perf: explicitly disabled via build config 00:05:11.743 test-crypto-perf: explicitly disabled via build config 00:05:11.743 test-dma-perf: explicitly disabled via build config 00:05:11.743 test-eventdev: explicitly disabled via build config 00:05:11.743 test-fib: explicitly disabled via build config 00:05:11.743 test-flow-perf: explicitly disabled via build config 00:05:11.743 test-gpudev: explicitly disabled via build config 00:05:11.743 test-mldev: explicitly disabled via build config 00:05:11.743 test-pipeline: explicitly disabled via build config 00:05:11.743 test-pmd: explicitly disabled via build config 00:05:11.743 test-regex: explicitly disabled via build config 00:05:11.743 test-sad: explicitly disabled via build config 00:05:11.743 test-security-perf: explicitly disabled via build config 00:05:11.743 00:05:11.743 libs: 00:05:11.743 argparse: explicitly disabled via build config 00:05:11.743 metrics: explicitly disabled via build config 00:05:11.743 acl: explicitly disabled via build config 00:05:11.743 bbdev: explicitly disabled via build config 00:05:11.743 bitratestats: explicitly disabled via build config 00:05:11.743 bpf: explicitly disabled via build config 00:05:11.743 cfgfile: explicitly disabled via build config 00:05:11.743 distributor: explicitly disabled via build config 00:05:11.743 efd: explicitly disabled via build config 00:05:11.743 eventdev: explicitly disabled via build config 00:05:11.743 dispatcher: explicitly disabled via build config 00:05:11.743 gpudev: explicitly disabled via build config 00:05:11.743 gro: explicitly disabled via build config 00:05:11.743 gso: explicitly disabled via build config 00:05:11.743 ip_frag: explicitly disabled via build config 00:05:11.743 jobstats: explicitly disabled via build config 00:05:11.743 latencystats: explicitly disabled via build config 
00:05:11.743 lpm: explicitly disabled via build config 00:05:11.743 member: explicitly disabled via build config 00:05:11.743 pcapng: explicitly disabled via build config 00:05:11.743 rawdev: explicitly disabled via build config 00:05:11.743 regexdev: explicitly disabled via build config 00:05:11.743 mldev: explicitly disabled via build config 00:05:11.743 rib: explicitly disabled via build config 00:05:11.743 sched: explicitly disabled via build config 00:05:11.743 stack: explicitly disabled via build config 00:05:11.743 ipsec: explicitly disabled via build config 00:05:11.743 pdcp: explicitly disabled via build config 00:05:11.743 fib: explicitly disabled via build config 00:05:11.743 port: explicitly disabled via build config 00:05:11.743 pdump: explicitly disabled via build config 00:05:11.743 table: explicitly disabled via build config 00:05:11.743 pipeline: explicitly disabled via build config 00:05:11.743 graph: explicitly disabled via build config 00:05:11.743 node: explicitly disabled via build config 00:05:11.743 00:05:11.743 drivers: 00:05:11.743 common/cpt: not in enabled drivers build config 00:05:11.743 common/dpaax: not in enabled drivers build config 00:05:11.743 common/iavf: not in enabled drivers build config 00:05:11.744 common/idpf: not in enabled drivers build config 00:05:11.744 common/ionic: not in enabled drivers build config 00:05:11.744 common/mvep: not in enabled drivers build config 00:05:11.744 common/octeontx: not in enabled drivers build config 00:05:11.744 bus/auxiliary: not in enabled drivers build config 00:05:11.744 bus/cdx: not in enabled drivers build config 00:05:11.744 bus/dpaa: not in enabled drivers build config 00:05:11.744 bus/fslmc: not in enabled drivers build config 00:05:11.744 bus/ifpga: not in enabled drivers build config 00:05:11.744 bus/platform: not in enabled drivers build config 00:05:11.744 bus/uacce: not in enabled drivers build config 00:05:11.744 bus/vmbus: not in enabled drivers build config 00:05:11.744 
common/cnxk: not in enabled drivers build config 00:05:11.744 common/mlx5: not in enabled drivers build config 00:05:11.744 common/nfp: not in enabled drivers build config 00:05:11.744 common/nitrox: not in enabled drivers build config 00:05:11.744 common/qat: not in enabled drivers build config 00:05:11.744 common/sfc_efx: not in enabled drivers build config 00:05:11.744 mempool/bucket: not in enabled drivers build config 00:05:11.744 mempool/cnxk: not in enabled drivers build config 00:05:11.744 mempool/dpaa: not in enabled drivers build config 00:05:11.744 mempool/dpaa2: not in enabled drivers build config 00:05:11.744 mempool/octeontx: not in enabled drivers build config 00:05:11.744 mempool/stack: not in enabled drivers build config 00:05:11.744 dma/cnxk: not in enabled drivers build config 00:05:11.744 dma/dpaa: not in enabled drivers build config 00:05:11.744 dma/dpaa2: not in enabled drivers build config 00:05:11.744 dma/hisilicon: not in enabled drivers build config 00:05:11.744 dma/idxd: not in enabled drivers build config 00:05:11.744 dma/ioat: not in enabled drivers build config 00:05:11.744 dma/skeleton: not in enabled drivers build config 00:05:11.744 net/af_packet: not in enabled drivers build config 00:05:11.744 net/af_xdp: not in enabled drivers build config 00:05:11.744 net/ark: not in enabled drivers build config 00:05:11.744 net/atlantic: not in enabled drivers build config 00:05:11.744 net/avp: not in enabled drivers build config 00:05:11.744 net/axgbe: not in enabled drivers build config 00:05:11.744 net/bnx2x: not in enabled drivers build config 00:05:11.744 net/bnxt: not in enabled drivers build config 00:05:11.744 net/bonding: not in enabled drivers build config 00:05:11.744 net/cnxk: not in enabled drivers build config 00:05:11.744 net/cpfl: not in enabled drivers build config 00:05:11.744 net/cxgbe: not in enabled drivers build config 00:05:11.744 net/dpaa: not in enabled drivers build config 00:05:11.744 net/dpaa2: not in enabled drivers 
build config 00:05:11.744 net/e1000: not in enabled drivers build config 00:05:11.744 net/ena: not in enabled drivers build config 00:05:11.744 net/enetc: not in enabled drivers build config 00:05:11.744 net/enetfec: not in enabled drivers build config 00:05:11.744 net/enic: not in enabled drivers build config 00:05:11.744 net/failsafe: not in enabled drivers build config 00:05:11.744 net/fm10k: not in enabled drivers build config 00:05:11.744 net/gve: not in enabled drivers build config 00:05:11.744 net/hinic: not in enabled drivers build config 00:05:11.744 net/hns3: not in enabled drivers build config 00:05:11.744 net/i40e: not in enabled drivers build config 00:05:11.744 net/iavf: not in enabled drivers build config 00:05:11.744 net/ice: not in enabled drivers build config 00:05:11.744 net/idpf: not in enabled drivers build config 00:05:11.744 net/igc: not in enabled drivers build config 00:05:11.744 net/ionic: not in enabled drivers build config 00:05:11.744 net/ipn3ke: not in enabled drivers build config 00:05:11.744 net/ixgbe: not in enabled drivers build config 00:05:11.744 net/mana: not in enabled drivers build config 00:05:11.744 net/memif: not in enabled drivers build config 00:05:11.744 net/mlx4: not in enabled drivers build config 00:05:11.744 net/mlx5: not in enabled drivers build config 00:05:11.744 net/mvneta: not in enabled drivers build config 00:05:11.744 net/mvpp2: not in enabled drivers build config 00:05:11.744 net/netvsc: not in enabled drivers build config 00:05:11.744 net/nfb: not in enabled drivers build config 00:05:11.744 net/nfp: not in enabled drivers build config 00:05:11.744 net/ngbe: not in enabled drivers build config 00:05:11.744 net/null: not in enabled drivers build config 00:05:11.744 net/octeontx: not in enabled drivers build config 00:05:11.744 net/octeon_ep: not in enabled drivers build config 00:05:11.744 net/pcap: not in enabled drivers build config 00:05:11.744 net/pfe: not in enabled drivers build config 00:05:11.744 
net/qede: not in enabled drivers build config 00:05:11.744 net/ring: not in enabled drivers build config 00:05:11.744 net/sfc: not in enabled drivers build config 00:05:11.744 net/softnic: not in enabled drivers build config 00:05:11.744 net/tap: not in enabled drivers build config 00:05:11.744 net/thunderx: not in enabled drivers build config 00:05:11.744 net/txgbe: not in enabled drivers build config 00:05:11.744 net/vdev_netvsc: not in enabled drivers build config 00:05:11.744 net/vhost: not in enabled drivers build config 00:05:11.744 net/virtio: not in enabled drivers build config 00:05:11.744 net/vmxnet3: not in enabled drivers build config 00:05:11.744 raw/*: missing internal dependency, "rawdev" 00:05:11.744 crypto/armv8: not in enabled drivers build config 00:05:11.744 crypto/bcmfs: not in enabled drivers build config 00:05:11.744 crypto/caam_jr: not in enabled drivers build config 00:05:11.744 crypto/ccp: not in enabled drivers build config 00:05:11.744 crypto/cnxk: not in enabled drivers build config 00:05:11.744 crypto/dpaa_sec: not in enabled drivers build config 00:05:11.744 crypto/dpaa2_sec: not in enabled drivers build config 00:05:11.744 crypto/ipsec_mb: not in enabled drivers build config 00:05:11.744 crypto/mlx5: not in enabled drivers build config 00:05:11.744 crypto/mvsam: not in enabled drivers build config 00:05:11.744 crypto/nitrox: not in enabled drivers build config 00:05:11.744 crypto/null: not in enabled drivers build config 00:05:11.744 crypto/octeontx: not in enabled drivers build config 00:05:11.744 crypto/openssl: not in enabled drivers build config 00:05:11.744 crypto/scheduler: not in enabled drivers build config 00:05:11.744 crypto/uadk: not in enabled drivers build config 00:05:11.744 crypto/virtio: not in enabled drivers build config 00:05:11.744 compress/isal: not in enabled drivers build config 00:05:11.744 compress/mlx5: not in enabled drivers build config 00:05:11.744 compress/nitrox: not in enabled drivers build config 
00:05:11.744 compress/octeontx: not in enabled drivers build config 00:05:11.744 compress/zlib: not in enabled drivers build config 00:05:11.744 regex/*: missing internal dependency, "regexdev" 00:05:11.744 ml/*: missing internal dependency, "mldev" 00:05:11.744 vdpa/ifc: not in enabled drivers build config 00:05:11.744 vdpa/mlx5: not in enabled drivers build config 00:05:11.744 vdpa/nfp: not in enabled drivers build config 00:05:11.744 vdpa/sfc: not in enabled drivers build config 00:05:11.744 event/*: missing internal dependency, "eventdev" 00:05:11.744 baseband/*: missing internal dependency, "bbdev" 00:05:11.744 gpu/*: missing internal dependency, "gpudev" 00:05:11.744 00:05:11.744 00:05:11.744 Build targets in project: 85 00:05:11.744 00:05:11.744 DPDK 24.03.0 00:05:11.744 00:05:11.744 User defined options 00:05:11.744 buildtype : debug 00:05:11.744 default_library : shared 00:05:11.744 libdir : lib 00:05:11.744 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:11.744 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:11.744 c_link_args : 00:05:11.744 cpu_instruction_set: native 00:05:11.744 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:05:11.744 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:05:11.744 enable_docs : false 00:05:11.744 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:05:11.744 enable_kmods : false 00:05:11.744 max_lcores : 128 00:05:11.744 tests : false 
00:05:11.744 00:05:11.744 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:11.744 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:12.003 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:12.003 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:12.003 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:12.003 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:12.003 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:12.003 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:12.003 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:12.003 [8/268] Linking static target lib/librte_kvargs.a 00:05:12.003 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:12.003 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:12.003 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:12.003 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:12.003 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:12.003 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:12.003 [15/268] Linking static target lib/librte_log.a 00:05:12.003 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:12.577 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.837 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:12.837 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:12.837 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:12.837 [21/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:12.837 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:12.837 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:12.837 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:12.837 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:12.837 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:12.837 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:12.837 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:12.837 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:12.837 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:12.837 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:12.837 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:12.837 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:12.837 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:12.837 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:12.837 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:12.837 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:12.837 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:12.837 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:12.837 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:12.837 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:12.837 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:12.837 
[43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:12.837 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:12.837 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:12.837 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:12.837 [47/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:12.837 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:12.837 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:12.837 [50/268] Linking static target lib/librte_telemetry.a 00:05:12.837 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:13.097 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:13.097 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:13.097 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:13.097 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:13.097 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:13.097 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:13.097 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:13.097 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:13.097 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:13.097 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:13.097 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:13.097 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:13.364 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:13.364 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture 
output) 00:05:13.364 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:13.364 [67/268] Linking target lib/librte_log.so.24.1 00:05:13.364 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:13.364 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:13.364 [70/268] Linking static target lib/librte_pci.a 00:05:13.627 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:13.627 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:13.628 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:13.628 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:13.628 [75/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:13.891 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:13.891 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:13.891 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:13.891 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:13.891 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:13.891 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:13.891 [82/268] Linking target lib/librte_kvargs.so.24.1 00:05:13.891 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:13.891 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:13.891 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:13.891 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:13.891 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:13.891 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:13.891 [89/268] Linking static target 
lib/librte_ring.a 00:05:13.891 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:13.891 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:13.891 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:13.891 [93/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:13.891 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:13.891 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:13.891 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:13.891 [97/268] Linking static target lib/librte_meter.a 00:05:13.891 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:13.891 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:13.891 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:13.891 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:13.891 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:13.891 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:13.891 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:13.891 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:13.891 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:14.154 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.154 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:14.154 [109/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:14.154 [110/268] Linking static target lib/librte_eal.a 00:05:14.154 [111/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.154 [112/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:14.154 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:14.154 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:14.154 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:14.154 [116/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:14.154 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:14.154 [118/268] Linking static target lib/librte_rcu.a 00:05:14.154 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:14.154 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:14.154 [121/268] Linking static target lib/librte_mempool.a 00:05:14.154 [122/268] Linking target lib/librte_telemetry.so.24.1 00:05:14.154 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:14.154 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:14.154 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:14.154 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:14.414 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:14.414 [128/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:14.414 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:14.414 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:14.414 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:14.414 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:14.414 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.414 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:14.414 [135/268] Generating lib/ring.sym_chk 
with a custom command (wrapped by meson to capture output) 00:05:14.414 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:14.414 [137/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:14.677 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:14.677 [139/268] Linking static target lib/librte_net.a 00:05:14.677 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:14.677 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:14.677 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:14.677 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:14.939 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:14.939 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:14.939 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:14.939 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:14.939 [148/268] Linking static target lib/librte_cmdline.a 00:05:14.939 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:14.939 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:14.939 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:14.939 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:14.939 [153/268] Linking static target lib/librte_timer.a 00:05:14.939 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:14.939 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:14.939 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:14.939 [157/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:15.198 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.198 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:15.198 [160/268] Linking static target lib/librte_dmadev.a 00:05:15.198 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:15.198 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:15.198 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:15.198 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:15.198 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:15.198 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:15.198 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:15.198 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.198 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:15.457 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:15.457 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:15.457 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.457 [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:15.457 [174/268] Linking static target lib/librte_power.a 00:05:15.457 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:15.457 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:15.457 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:15.457 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:15.457 [179/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:15.457 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:15.457 [181/268] Linking static target lib/librte_compressdev.a 00:05:15.457 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:15.457 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:15.457 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:15.716 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:15.716 [186/268] Linking static target lib/librte_hash.a 00:05:15.716 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:15.716 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:15.716 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:15.716 [190/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:15.716 [191/268] Linking static target lib/librte_mbuf.a 00:05:15.716 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:15.716 [193/268] Linking static target lib/librte_reorder.a 00:05:15.716 [194/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.716 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:15.716 [196/268] Linking static target lib/librte_security.a 00:05:15.716 [197/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.716 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:15.716 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:15.716 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:15.716 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:15.716 [202/268] Compiling C 
object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:15.716 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:15.716 [204/268] Linking static target drivers/librte_bus_vdev.a 00:05:15.974 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:15.974 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:15.974 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:15.974 [208/268] Linking static target drivers/librte_bus_pci.a 00:05:15.974 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:15.974 [210/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.974 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:15.974 [212/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.974 [213/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:15.974 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:15.974 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:15.974 [216/268] Linking static target drivers/librte_mempool_ring.a 00:05:15.974 [217/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.974 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.232 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.232 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.232 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:16.232 [222/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.232 [223/268] Linking static target lib/librte_ethdev.a 00:05:16.232 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:16.490 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:16.490 [226/268] Linking static target lib/librte_cryptodev.a 00:05:17.424 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:18.798 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:20.698 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.698 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:20.698 [231/268] Linking target lib/librte_eal.so.24.1 00:05:20.698 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:20.698 [233/268] Linking target lib/librte_ring.so.24.1 00:05:20.698 [234/268] Linking target lib/librte_timer.so.24.1 00:05:20.698 [235/268] Linking target lib/librte_meter.so.24.1 00:05:20.698 [236/268] Linking target lib/librte_pci.so.24.1 00:05:20.698 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:20.698 [238/268] Linking target lib/librte_dmadev.so.24.1 00:05:20.956 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:20.956 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:20.956 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:20.956 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:20.956 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:20.956 [244/268] Linking target lib/librte_rcu.so.24.1 00:05:20.956 [245/268] Linking target 
lib/librte_mempool.so.24.1 00:05:20.956 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:20.956 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:20.956 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:21.214 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:21.214 [250/268] Linking target lib/librte_mbuf.so.24.1 00:05:21.214 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:21.214 [252/268] Linking target lib/librte_reorder.so.24.1 00:05:21.214 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:21.214 [254/268] Linking target lib/librte_net.so.24.1 00:05:21.214 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:05:21.472 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:21.472 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:21.472 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:21.472 [259/268] Linking target lib/librte_hash.so.24.1 00:05:21.472 [260/268] Linking target lib/librte_security.so.24.1 00:05:21.472 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:21.751 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:21.751 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:21.751 [264/268] Linking target lib/librte_power.so.24.1 00:05:25.091 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:25.091 [266/268] Linking static target lib/librte_vhost.a 00:05:25.658 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:25.916 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:25.916 INFO: autodetecting backend as ninja 00:05:25.916 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:05:47.834 CC lib/ut_mock/mock.o 00:05:47.834 CC lib/ut/ut.o 00:05:47.834 CC lib/log/log.o 00:05:47.834 CC lib/log/log_flags.o 00:05:47.834 CC lib/log/log_deprecated.o 00:05:47.834 LIB libspdk_ut.a 00:05:47.834 LIB libspdk_ut_mock.a 00:05:47.834 LIB libspdk_log.a 00:05:47.834 SO libspdk_ut_mock.so.6.0 00:05:47.834 SO libspdk_ut.so.2.0 00:05:47.834 SO libspdk_log.so.7.1 00:05:47.834 SYMLINK libspdk_ut_mock.so 00:05:47.834 SYMLINK libspdk_ut.so 00:05:47.834 SYMLINK libspdk_log.so 00:05:47.834 CC lib/ioat/ioat.o 00:05:47.834 CC lib/util/base64.o 00:05:47.834 CC lib/dma/dma.o 00:05:47.834 CXX lib/trace_parser/trace.o 00:05:47.834 CC lib/util/bit_array.o 00:05:47.834 CC lib/util/cpuset.o 00:05:47.834 CC lib/util/crc32.o 00:05:47.834 CC lib/util/crc16.o 00:05:47.834 CC lib/util/crc32c.o 00:05:47.834 CC lib/util/crc32_ieee.o 00:05:47.834 CC lib/util/crc64.o 00:05:47.834 CC lib/util/dif.o 00:05:47.834 CC lib/util/fd.o 00:05:47.834 CC lib/util/fd_group.o 00:05:47.834 CC lib/util/file.o 00:05:47.834 CC lib/util/hexlify.o 00:05:47.834 CC lib/util/iov.o 00:05:47.834 CC lib/util/math.o 00:05:47.834 CC lib/util/net.o 00:05:47.834 CC lib/util/pipe.o 00:05:47.834 CC lib/util/strerror_tls.o 00:05:47.834 CC lib/util/string.o 00:05:47.834 CC lib/util/uuid.o 00:05:47.834 CC lib/util/xor.o 00:05:47.834 CC lib/util/zipf.o 00:05:47.834 CC lib/util/md5.o 00:05:47.834 CC lib/vfio_user/host/vfio_user_pci.o 00:05:47.834 CC lib/vfio_user/host/vfio_user.o 00:05:47.834 LIB libspdk_dma.a 00:05:47.834 SO libspdk_dma.so.5.0 00:05:47.834 SYMLINK libspdk_dma.so 00:05:47.834 LIB libspdk_vfio_user.a 00:05:47.834 LIB libspdk_ioat.a 00:05:47.834 SO libspdk_vfio_user.so.5.0 00:05:47.834 SO libspdk_ioat.so.7.0 00:05:47.834 SYMLINK libspdk_vfio_user.so 00:05:47.834 SYMLINK libspdk_ioat.so 00:05:47.834 LIB libspdk_util.a 00:05:47.834 SO libspdk_util.so.10.1 00:05:47.834 SYMLINK libspdk_util.so 00:05:47.834 CC lib/conf/conf.o 
00:05:47.834 CC lib/json/json_parse.o 00:05:47.834 CC lib/idxd/idxd.o 00:05:47.834 CC lib/vmd/vmd.o 00:05:47.834 CC lib/env_dpdk/env.o 00:05:47.834 CC lib/rdma_utils/rdma_utils.o 00:05:47.834 CC lib/idxd/idxd_user.o 00:05:47.834 CC lib/vmd/led.o 00:05:47.834 CC lib/json/json_util.o 00:05:47.834 CC lib/env_dpdk/memory.o 00:05:47.834 CC lib/idxd/idxd_kernel.o 00:05:47.834 CC lib/env_dpdk/pci.o 00:05:47.834 CC lib/json/json_write.o 00:05:47.834 CC lib/env_dpdk/init.o 00:05:47.834 CC lib/env_dpdk/threads.o 00:05:47.834 CC lib/env_dpdk/pci_ioat.o 00:05:47.834 CC lib/env_dpdk/pci_virtio.o 00:05:47.834 CC lib/env_dpdk/pci_vmd.o 00:05:47.834 CC lib/env_dpdk/pci_idxd.o 00:05:47.834 CC lib/env_dpdk/pci_event.o 00:05:47.834 CC lib/env_dpdk/sigbus_handler.o 00:05:47.834 CC lib/env_dpdk/pci_dpdk.o 00:05:47.834 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:47.834 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:47.834 LIB libspdk_trace_parser.a 00:05:47.834 SO libspdk_trace_parser.so.6.0 00:05:47.834 SYMLINK libspdk_trace_parser.so 00:05:47.834 LIB libspdk_conf.a 00:05:47.834 SO libspdk_conf.so.6.0 00:05:47.834 LIB libspdk_json.a 00:05:47.834 SYMLINK libspdk_conf.so 00:05:47.834 SO libspdk_json.so.6.0 00:05:47.834 LIB libspdk_rdma_utils.a 00:05:47.834 SO libspdk_rdma_utils.so.1.0 00:05:47.834 SYMLINK libspdk_json.so 00:05:47.834 SYMLINK libspdk_rdma_utils.so 00:05:47.834 CC lib/jsonrpc/jsonrpc_server.o 00:05:47.834 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:47.834 CC lib/jsonrpc/jsonrpc_client.o 00:05:47.834 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:47.834 CC lib/rdma_provider/common.o 00:05:47.834 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:47.834 LIB libspdk_idxd.a 00:05:47.834 SO libspdk_idxd.so.12.1 00:05:47.834 LIB libspdk_vmd.a 00:05:47.834 SO libspdk_vmd.so.6.0 00:05:47.834 SYMLINK libspdk_idxd.so 00:05:47.834 SYMLINK libspdk_vmd.so 00:05:47.834 LIB libspdk_rdma_provider.a 00:05:47.834 LIB libspdk_jsonrpc.a 00:05:47.834 SO libspdk_rdma_provider.so.7.0 00:05:47.834 SO 
libspdk_jsonrpc.so.6.0 00:05:47.834 SYMLINK libspdk_rdma_provider.so 00:05:47.834 SYMLINK libspdk_jsonrpc.so 00:05:47.834 CC lib/rpc/rpc.o 00:05:47.834 LIB libspdk_rpc.a 00:05:47.834 SO libspdk_rpc.so.6.0 00:05:47.834 SYMLINK libspdk_rpc.so 00:05:47.834 CC lib/trace/trace.o 00:05:47.834 CC lib/notify/notify.o 00:05:47.834 CC lib/trace/trace_flags.o 00:05:47.834 CC lib/notify/notify_rpc.o 00:05:47.834 CC lib/trace/trace_rpc.o 00:05:47.834 CC lib/keyring/keyring.o 00:05:47.834 CC lib/keyring/keyring_rpc.o 00:05:47.834 LIB libspdk_notify.a 00:05:47.834 SO libspdk_notify.so.6.0 00:05:47.834 SYMLINK libspdk_notify.so 00:05:47.834 LIB libspdk_keyring.a 00:05:47.834 SO libspdk_keyring.so.2.0 00:05:47.834 LIB libspdk_trace.a 00:05:47.834 SO libspdk_trace.so.11.0 00:05:47.834 SYMLINK libspdk_keyring.so 00:05:47.834 SYMLINK libspdk_trace.so 00:05:47.834 CC lib/thread/thread.o 00:05:47.834 CC lib/thread/iobuf.o 00:05:47.834 CC lib/sock/sock.o 00:05:47.834 LIB libspdk_env_dpdk.a 00:05:47.834 CC lib/sock/sock_rpc.o 00:05:48.092 SO libspdk_env_dpdk.so.15.1 00:05:48.092 SYMLINK libspdk_env_dpdk.so 00:05:48.351 LIB libspdk_sock.a 00:05:48.351 SO libspdk_sock.so.10.0 00:05:48.351 SYMLINK libspdk_sock.so 00:05:48.609 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:48.609 CC lib/nvme/nvme_ctrlr.o 00:05:48.609 CC lib/nvme/nvme_fabric.o 00:05:48.609 CC lib/nvme/nvme_ns_cmd.o 00:05:48.609 CC lib/nvme/nvme_ns.o 00:05:48.609 CC lib/nvme/nvme_pcie_common.o 00:05:48.609 CC lib/nvme/nvme_pcie.o 00:05:48.609 CC lib/nvme/nvme_qpair.o 00:05:48.609 CC lib/nvme/nvme.o 00:05:48.609 CC lib/nvme/nvme_quirks.o 00:05:48.609 CC lib/nvme/nvme_transport.o 00:05:48.609 CC lib/nvme/nvme_discovery.o 00:05:48.609 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:48.609 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:48.609 CC lib/nvme/nvme_tcp.o 00:05:48.609 CC lib/nvme/nvme_opal.o 00:05:48.609 CC lib/nvme/nvme_io_msg.o 00:05:48.609 CC lib/nvme/nvme_poll_group.o 00:05:48.609 CC lib/nvme/nvme_zns.o 00:05:48.609 CC lib/nvme/nvme_stubs.o 
00:05:48.609 CC lib/nvme/nvme_auth.o 00:05:48.609 CC lib/nvme/nvme_cuse.o 00:05:48.609 CC lib/nvme/nvme_vfio_user.o 00:05:48.609 CC lib/nvme/nvme_rdma.o 00:05:49.545 LIB libspdk_thread.a 00:05:49.545 SO libspdk_thread.so.11.0 00:05:49.545 SYMLINK libspdk_thread.so 00:05:49.803 CC lib/virtio/virtio.o 00:05:49.803 CC lib/accel/accel.o 00:05:49.803 CC lib/vfu_tgt/tgt_endpoint.o 00:05:49.804 CC lib/accel/accel_rpc.o 00:05:49.804 CC lib/vfu_tgt/tgt_rpc.o 00:05:49.804 CC lib/virtio/virtio_vhost_user.o 00:05:49.804 CC lib/accel/accel_sw.o 00:05:49.804 CC lib/virtio/virtio_vfio_user.o 00:05:49.804 CC lib/virtio/virtio_pci.o 00:05:49.804 CC lib/blob/blobstore.o 00:05:49.804 CC lib/blob/request.o 00:05:49.804 CC lib/fsdev/fsdev.o 00:05:49.804 CC lib/init/json_config.o 00:05:49.804 CC lib/blob/zeroes.o 00:05:49.804 CC lib/fsdev/fsdev_io.o 00:05:49.804 CC lib/init/subsystem.o 00:05:49.804 CC lib/blob/blob_bs_dev.o 00:05:49.804 CC lib/fsdev/fsdev_rpc.o 00:05:49.804 CC lib/init/subsystem_rpc.o 00:05:49.804 CC lib/init/rpc.o 00:05:50.062 LIB libspdk_init.a 00:05:50.062 SO libspdk_init.so.6.0 00:05:50.062 SYMLINK libspdk_init.so 00:05:50.320 LIB libspdk_virtio.a 00:05:50.320 LIB libspdk_vfu_tgt.a 00:05:50.320 SO libspdk_virtio.so.7.0 00:05:50.320 SO libspdk_vfu_tgt.so.3.0 00:05:50.320 SYMLINK libspdk_vfu_tgt.so 00:05:50.320 SYMLINK libspdk_virtio.so 00:05:50.320 CC lib/event/app.o 00:05:50.320 CC lib/event/reactor.o 00:05:50.320 CC lib/event/log_rpc.o 00:05:50.320 CC lib/event/app_rpc.o 00:05:50.320 CC lib/event/scheduler_static.o 00:05:50.578 LIB libspdk_fsdev.a 00:05:50.578 SO libspdk_fsdev.so.2.0 00:05:50.578 SYMLINK libspdk_fsdev.so 00:05:50.836 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:50.836 LIB libspdk_event.a 00:05:50.836 SO libspdk_event.so.14.0 00:05:50.836 SYMLINK libspdk_event.so 00:05:51.093 LIB libspdk_accel.a 00:05:51.093 SO libspdk_accel.so.16.0 00:05:51.093 LIB libspdk_nvme.a 00:05:51.093 SYMLINK libspdk_accel.so 00:05:51.093 SO libspdk_nvme.so.15.0 
00:05:51.351 CC lib/bdev/bdev.o 00:05:51.351 CC lib/bdev/bdev_rpc.o 00:05:51.351 CC lib/bdev/bdev_zone.o 00:05:51.351 CC lib/bdev/part.o 00:05:51.351 CC lib/bdev/scsi_nvme.o 00:05:51.351 SYMLINK libspdk_nvme.so 00:05:51.351 LIB libspdk_fuse_dispatcher.a 00:05:51.351 SO libspdk_fuse_dispatcher.so.1.0 00:05:51.609 SYMLINK libspdk_fuse_dispatcher.so 00:05:52.985 LIB libspdk_blob.a 00:05:52.985 SO libspdk_blob.so.11.0 00:05:52.985 SYMLINK libspdk_blob.so 00:05:53.244 CC lib/blobfs/blobfs.o 00:05:53.244 CC lib/blobfs/tree.o 00:05:53.244 CC lib/lvol/lvol.o 00:05:53.810 LIB libspdk_bdev.a 00:05:53.810 SO libspdk_bdev.so.17.0 00:05:54.072 SYMLINK libspdk_bdev.so 00:05:54.072 LIB libspdk_blobfs.a 00:05:54.072 SO libspdk_blobfs.so.10.0 00:05:54.072 SYMLINK libspdk_blobfs.so 00:05:54.072 CC lib/nbd/nbd.o 00:05:54.072 CC lib/nvmf/ctrlr.o 00:05:54.072 CC lib/ublk/ublk.o 00:05:54.072 CC lib/nbd/nbd_rpc.o 00:05:54.072 CC lib/ublk/ublk_rpc.o 00:05:54.072 CC lib/scsi/dev.o 00:05:54.072 CC lib/scsi/lun.o 00:05:54.072 CC lib/nvmf/ctrlr_discovery.o 00:05:54.072 CC lib/nvmf/ctrlr_bdev.o 00:05:54.072 CC lib/scsi/port.o 00:05:54.072 CC lib/ftl/ftl_core.o 00:05:54.072 CC lib/nvmf/subsystem.o 00:05:54.072 CC lib/scsi/scsi.o 00:05:54.072 CC lib/nvmf/nvmf.o 00:05:54.072 CC lib/ftl/ftl_init.o 00:05:54.072 CC lib/ftl/ftl_layout.o 00:05:54.072 CC lib/scsi/scsi_bdev.o 00:05:54.072 CC lib/nvmf/nvmf_rpc.o 00:05:54.072 CC lib/scsi/scsi_pr.o 00:05:54.072 CC lib/ftl/ftl_debug.o 00:05:54.072 CC lib/nvmf/transport.o 00:05:54.072 CC lib/ftl/ftl_io.o 00:05:54.072 CC lib/nvmf/tcp.o 00:05:54.072 CC lib/scsi/scsi_rpc.o 00:05:54.072 CC lib/ftl/ftl_sb.o 00:05:54.072 CC lib/scsi/task.o 00:05:54.072 CC lib/nvmf/stubs.o 00:05:54.072 CC lib/ftl/ftl_l2p.o 00:05:54.072 CC lib/ftl/ftl_l2p_flat.o 00:05:54.072 CC lib/nvmf/mdns_server.o 00:05:54.072 CC lib/nvmf/vfio_user.o 00:05:54.072 CC lib/ftl/ftl_band.o 00:05:54.072 CC lib/ftl/ftl_nv_cache.o 00:05:54.072 CC lib/nvmf/rdma.o 00:05:54.072 CC lib/ftl/ftl_band_ops.o 
00:05:54.072 CC lib/nvmf/auth.o 00:05:54.072 CC lib/ftl/ftl_writer.o 00:05:54.072 CC lib/ftl/ftl_rq.o 00:05:54.072 CC lib/ftl/ftl_reloc.o 00:05:54.072 CC lib/ftl/ftl_l2p_cache.o 00:05:54.072 CC lib/ftl/ftl_p2l.o 00:05:54.072 CC lib/ftl/ftl_p2l_log.o 00:05:54.072 CC lib/ftl/mngt/ftl_mngt.o 00:05:54.072 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:54.072 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:54.072 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:54.072 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:54.333 LIB libspdk_lvol.a 00:05:54.333 SO libspdk_lvol.so.10.0 00:05:54.333 SYMLINK libspdk_lvol.so 00:05:54.333 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:54.594 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:54.594 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:54.594 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:54.594 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:54.594 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:54.594 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:54.594 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:54.594 CC lib/ftl/utils/ftl_conf.o 00:05:54.594 CC lib/ftl/utils/ftl_md.o 00:05:54.594 CC lib/ftl/utils/ftl_mempool.o 00:05:54.594 CC lib/ftl/utils/ftl_bitmap.o 00:05:54.594 CC lib/ftl/utils/ftl_property.o 00:05:54.594 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:54.594 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:54.594 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:54.595 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:54.595 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:54.858 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:54.858 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:54.858 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:54.858 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:54.858 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:54.858 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:54.858 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:54.858 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:54.858 CC lib/ftl/base/ftl_base_dev.o 00:05:54.858 CC lib/ftl/base/ftl_base_bdev.o 00:05:54.858 CC lib/ftl/ftl_trace.o 00:05:54.858 LIB libspdk_nbd.a 00:05:55.118 SO libspdk_nbd.so.7.0 
00:05:55.118 SYMLINK libspdk_nbd.so 00:05:55.118 LIB libspdk_scsi.a 00:05:55.118 SO libspdk_scsi.so.9.0 00:05:55.377 SYMLINK libspdk_scsi.so 00:05:55.377 LIB libspdk_ublk.a 00:05:55.377 SO libspdk_ublk.so.3.0 00:05:55.377 SYMLINK libspdk_ublk.so 00:05:55.377 CC lib/vhost/vhost.o 00:05:55.377 CC lib/iscsi/conn.o 00:05:55.377 CC lib/vhost/vhost_rpc.o 00:05:55.377 CC lib/iscsi/init_grp.o 00:05:55.377 CC lib/vhost/vhost_scsi.o 00:05:55.377 CC lib/iscsi/iscsi.o 00:05:55.377 CC lib/vhost/vhost_blk.o 00:05:55.377 CC lib/iscsi/param.o 00:05:55.377 CC lib/vhost/rte_vhost_user.o 00:05:55.377 CC lib/iscsi/portal_grp.o 00:05:55.377 CC lib/iscsi/tgt_node.o 00:05:55.377 CC lib/iscsi/iscsi_subsystem.o 00:05:55.377 CC lib/iscsi/iscsi_rpc.o 00:05:55.377 CC lib/iscsi/task.o 00:05:55.636 LIB libspdk_ftl.a 00:05:55.894 SO libspdk_ftl.so.9.0 00:05:56.153 SYMLINK libspdk_ftl.so 00:05:56.719 LIB libspdk_vhost.a 00:05:56.719 SO libspdk_vhost.so.8.0 00:05:56.719 LIB libspdk_nvmf.a 00:05:56.719 SYMLINK libspdk_vhost.so 00:05:56.719 SO libspdk_nvmf.so.20.0 00:05:56.978 LIB libspdk_iscsi.a 00:05:56.978 SO libspdk_iscsi.so.8.0 00:05:56.978 SYMLINK libspdk_nvmf.so 00:05:56.978 SYMLINK libspdk_iscsi.so 00:05:57.236 CC module/vfu_device/vfu_virtio.o 00:05:57.236 CC module/env_dpdk/env_dpdk_rpc.o 00:05:57.236 CC module/vfu_device/vfu_virtio_blk.o 00:05:57.236 CC module/vfu_device/vfu_virtio_scsi.o 00:05:57.236 CC module/vfu_device/vfu_virtio_rpc.o 00:05:57.236 CC module/vfu_device/vfu_virtio_fs.o 00:05:57.494 CC module/keyring/file/keyring.o 00:05:57.494 CC module/sock/posix/posix.o 00:05:57.494 CC module/keyring/linux/keyring.o 00:05:57.494 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:57.494 CC module/keyring/file/keyring_rpc.o 00:05:57.494 CC module/scheduler/gscheduler/gscheduler.o 00:05:57.494 CC module/keyring/linux/keyring_rpc.o 00:05:57.494 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:57.495 CC module/blob/bdev/blob_bdev.o 00:05:57.495 CC module/accel/dsa/accel_dsa.o 
00:05:57.495 CC module/fsdev/aio/fsdev_aio.o 00:05:57.495 CC module/accel/dsa/accel_dsa_rpc.o 00:05:57.495 CC module/accel/ioat/accel_ioat.o 00:05:57.495 CC module/accel/error/accel_error.o 00:05:57.495 CC module/accel/ioat/accel_ioat_rpc.o 00:05:57.495 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:57.495 CC module/accel/error/accel_error_rpc.o 00:05:57.495 CC module/fsdev/aio/linux_aio_mgr.o 00:05:57.495 CC module/accel/iaa/accel_iaa.o 00:05:57.495 CC module/accel/iaa/accel_iaa_rpc.o 00:05:57.495 LIB libspdk_env_dpdk_rpc.a 00:05:57.495 SO libspdk_env_dpdk_rpc.so.6.0 00:05:57.753 LIB libspdk_keyring_file.a 00:05:57.753 SYMLINK libspdk_env_dpdk_rpc.so 00:05:57.753 LIB libspdk_scheduler_dpdk_governor.a 00:05:57.753 SO libspdk_keyring_file.so.2.0 00:05:57.753 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:57.753 LIB libspdk_keyring_linux.a 00:05:57.753 LIB libspdk_scheduler_dynamic.a 00:05:57.753 LIB libspdk_accel_error.a 00:05:57.753 LIB libspdk_accel_ioat.a 00:05:57.753 LIB libspdk_scheduler_gscheduler.a 00:05:57.753 SO libspdk_keyring_linux.so.1.0 00:05:57.753 LIB libspdk_accel_iaa.a 00:05:57.753 SYMLINK libspdk_keyring_file.so 00:05:57.753 SO libspdk_scheduler_dynamic.so.4.0 00:05:57.753 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:57.753 SO libspdk_accel_error.so.2.0 00:05:57.753 SO libspdk_accel_ioat.so.6.0 00:05:57.753 SO libspdk_scheduler_gscheduler.so.4.0 00:05:57.753 SO libspdk_accel_iaa.so.3.0 00:05:57.753 SYMLINK libspdk_keyring_linux.so 00:05:57.753 SYMLINK libspdk_scheduler_dynamic.so 00:05:57.753 LIB libspdk_blob_bdev.a 00:05:57.753 SYMLINK libspdk_accel_error.so 00:05:57.753 SYMLINK libspdk_scheduler_gscheduler.so 00:05:57.753 LIB libspdk_accel_dsa.a 00:05:57.753 SYMLINK libspdk_accel_ioat.so 00:05:57.753 SYMLINK libspdk_accel_iaa.so 00:05:57.753 SO libspdk_blob_bdev.so.11.0 00:05:57.753 SO libspdk_accel_dsa.so.5.0 00:05:57.753 SYMLINK libspdk_blob_bdev.so 00:05:57.753 SYMLINK libspdk_accel_dsa.so 00:05:58.011 CC module/bdev/malloc/bdev_malloc.o 
00:05:58.011 CC module/bdev/gpt/gpt.o 00:05:58.011 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:58.011 CC module/bdev/delay/vbdev_delay.o 00:05:58.011 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:58.011 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:58.011 CC module/bdev/lvol/vbdev_lvol.o 00:05:58.011 CC module/bdev/raid/bdev_raid.o 00:05:58.011 CC module/bdev/gpt/vbdev_gpt.o 00:05:58.011 CC module/bdev/raid/bdev_raid_rpc.o 00:05:58.011 CC module/bdev/passthru/vbdev_passthru.o 00:05:58.011 CC module/bdev/nvme/bdev_nvme.o 00:05:58.011 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:58.011 CC module/bdev/raid/bdev_raid_sb.o 00:05:58.011 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:58.011 CC module/bdev/raid/raid0.o 00:05:58.011 CC module/bdev/nvme/nvme_rpc.o 00:05:58.011 CC module/blobfs/bdev/blobfs_bdev.o 00:05:58.011 CC module/bdev/raid/raid1.o 00:05:58.011 CC module/bdev/nvme/bdev_mdns_client.o 00:05:58.011 CC module/bdev/raid/concat.o 00:05:58.011 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:58.011 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:58.011 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:58.011 CC module/bdev/split/vbdev_split_rpc.o 00:05:58.011 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:58.011 CC module/bdev/nvme/vbdev_opal.o 00:05:58.011 CC module/bdev/split/vbdev_split.o 00:05:58.011 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:58.011 CC module/bdev/null/bdev_null.o 00:05:58.011 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:58.011 CC module/bdev/null/bdev_null_rpc.o 00:05:58.011 CC module/bdev/error/vbdev_error.o 00:05:58.011 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:58.011 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:58.011 CC module/bdev/error/vbdev_error_rpc.o 00:05:58.011 CC module/bdev/iscsi/bdev_iscsi.o 00:05:58.011 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:58.011 CC module/bdev/aio/bdev_aio.o 00:05:58.011 CC module/bdev/ftl/bdev_ftl.o 00:05:58.011 CC module/bdev/aio/bdev_aio_rpc.o 00:05:58.011 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:05:58.270 LIB libspdk_vfu_device.a 00:05:58.270 SO libspdk_vfu_device.so.3.0 00:05:58.270 LIB libspdk_fsdev_aio.a 00:05:58.270 SO libspdk_fsdev_aio.so.1.0 00:05:58.270 SYMLINK libspdk_vfu_device.so 00:05:58.270 LIB libspdk_sock_posix.a 00:05:58.529 SO libspdk_sock_posix.so.6.0 00:05:58.529 SYMLINK libspdk_fsdev_aio.so 00:05:58.529 LIB libspdk_blobfs_bdev.a 00:05:58.529 SYMLINK libspdk_sock_posix.so 00:05:58.529 SO libspdk_blobfs_bdev.so.6.0 00:05:58.529 LIB libspdk_bdev_gpt.a 00:05:58.529 SO libspdk_bdev_gpt.so.6.0 00:05:58.529 LIB libspdk_bdev_null.a 00:05:58.529 LIB libspdk_bdev_split.a 00:05:58.529 SYMLINK libspdk_blobfs_bdev.so 00:05:58.529 LIB libspdk_bdev_error.a 00:05:58.529 SO libspdk_bdev_null.so.6.0 00:05:58.529 SO libspdk_bdev_split.so.6.0 00:05:58.529 SYMLINK libspdk_bdev_gpt.so 00:05:58.529 SO libspdk_bdev_error.so.6.0 00:05:58.529 LIB libspdk_bdev_aio.a 00:05:58.529 LIB libspdk_bdev_malloc.a 00:05:58.529 LIB libspdk_bdev_passthru.a 00:05:58.787 SYMLINK libspdk_bdev_null.so 00:05:58.787 LIB libspdk_bdev_zone_block.a 00:05:58.787 SYMLINK libspdk_bdev_split.so 00:05:58.787 LIB libspdk_bdev_ftl.a 00:05:58.787 SO libspdk_bdev_aio.so.6.0 00:05:58.787 SO libspdk_bdev_passthru.so.6.0 00:05:58.787 SO libspdk_bdev_malloc.so.6.0 00:05:58.787 SYMLINK libspdk_bdev_error.so 00:05:58.787 SO libspdk_bdev_zone_block.so.6.0 00:05:58.787 SO libspdk_bdev_ftl.so.6.0 00:05:58.787 LIB libspdk_bdev_iscsi.a 00:05:58.787 LIB libspdk_bdev_delay.a 00:05:58.787 SO libspdk_bdev_iscsi.so.6.0 00:05:58.787 SYMLINK libspdk_bdev_aio.so 00:05:58.787 SYMLINK libspdk_bdev_passthru.so 00:05:58.787 SYMLINK libspdk_bdev_malloc.so 00:05:58.787 SYMLINK libspdk_bdev_zone_block.so 00:05:58.787 SO libspdk_bdev_delay.so.6.0 00:05:58.787 SYMLINK libspdk_bdev_ftl.so 00:05:58.787 SYMLINK libspdk_bdev_iscsi.so 00:05:58.787 SYMLINK libspdk_bdev_delay.so 00:05:58.787 LIB libspdk_bdev_lvol.a 00:05:58.787 SO libspdk_bdev_lvol.so.6.0 00:05:58.787 LIB libspdk_bdev_virtio.a 
00:05:58.787 SO libspdk_bdev_virtio.so.6.0 00:05:59.047 SYMLINK libspdk_bdev_lvol.so 00:05:59.047 SYMLINK libspdk_bdev_virtio.so 00:05:59.338 LIB libspdk_bdev_raid.a 00:05:59.338 SO libspdk_bdev_raid.so.6.0 00:05:59.338 SYMLINK libspdk_bdev_raid.so 00:06:00.717 LIB libspdk_bdev_nvme.a 00:06:00.975 SO libspdk_bdev_nvme.so.7.1 00:06:00.975 SYMLINK libspdk_bdev_nvme.so 00:06:01.234 CC module/event/subsystems/iobuf/iobuf.o 00:06:01.234 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:01.234 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:01.234 CC module/event/subsystems/vmd/vmd.o 00:06:01.234 CC module/event/subsystems/keyring/keyring.o 00:06:01.234 CC module/event/subsystems/sock/sock.o 00:06:01.234 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:01.234 CC module/event/subsystems/scheduler/scheduler.o 00:06:01.234 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:01.234 CC module/event/subsystems/fsdev/fsdev.o 00:06:01.535 LIB libspdk_event_keyring.a 00:06:01.535 LIB libspdk_event_vhost_blk.a 00:06:01.535 LIB libspdk_event_fsdev.a 00:06:01.535 LIB libspdk_event_vfu_tgt.a 00:06:01.535 LIB libspdk_event_scheduler.a 00:06:01.535 LIB libspdk_event_vmd.a 00:06:01.535 SO libspdk_event_keyring.so.1.0 00:06:01.535 LIB libspdk_event_sock.a 00:06:01.535 SO libspdk_event_vhost_blk.so.3.0 00:06:01.535 SO libspdk_event_scheduler.so.4.0 00:06:01.535 SO libspdk_event_fsdev.so.1.0 00:06:01.535 SO libspdk_event_vfu_tgt.so.3.0 00:06:01.535 LIB libspdk_event_iobuf.a 00:06:01.535 SO libspdk_event_vmd.so.6.0 00:06:01.535 SO libspdk_event_sock.so.5.0 00:06:01.535 SO libspdk_event_iobuf.so.3.0 00:06:01.535 SYMLINK libspdk_event_keyring.so 00:06:01.535 SYMLINK libspdk_event_vhost_blk.so 00:06:01.535 SYMLINK libspdk_event_scheduler.so 00:06:01.535 SYMLINK libspdk_event_vfu_tgt.so 00:06:01.535 SYMLINK libspdk_event_fsdev.so 00:06:01.535 SYMLINK libspdk_event_vmd.so 00:06:01.535 SYMLINK libspdk_event_sock.so 00:06:01.535 SYMLINK libspdk_event_iobuf.so 00:06:01.793 CC 
module/event/subsystems/accel/accel.o 00:06:01.793 LIB libspdk_event_accel.a 00:06:02.051 SO libspdk_event_accel.so.6.0 00:06:02.051 SYMLINK libspdk_event_accel.so 00:06:02.051 CC module/event/subsystems/bdev/bdev.o 00:06:02.308 LIB libspdk_event_bdev.a 00:06:02.308 SO libspdk_event_bdev.so.6.0 00:06:02.308 SYMLINK libspdk_event_bdev.so 00:06:02.566 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:02.566 CC module/event/subsystems/scsi/scsi.o 00:06:02.566 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:02.566 CC module/event/subsystems/nbd/nbd.o 00:06:02.566 CC module/event/subsystems/ublk/ublk.o 00:06:02.823 LIB libspdk_event_nbd.a 00:06:02.823 LIB libspdk_event_ublk.a 00:06:02.823 LIB libspdk_event_scsi.a 00:06:02.823 SO libspdk_event_nbd.so.6.0 00:06:02.823 SO libspdk_event_ublk.so.3.0 00:06:02.823 SO libspdk_event_scsi.so.6.0 00:06:02.823 SYMLINK libspdk_event_ublk.so 00:06:02.823 SYMLINK libspdk_event_nbd.so 00:06:02.823 SYMLINK libspdk_event_scsi.so 00:06:02.823 LIB libspdk_event_nvmf.a 00:06:02.823 SO libspdk_event_nvmf.so.6.0 00:06:02.823 SYMLINK libspdk_event_nvmf.so 00:06:03.081 CC module/event/subsystems/iscsi/iscsi.o 00:06:03.081 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:03.081 LIB libspdk_event_vhost_scsi.a 00:06:03.081 SO libspdk_event_vhost_scsi.so.3.0 00:06:03.081 LIB libspdk_event_iscsi.a 00:06:03.081 SO libspdk_event_iscsi.so.6.0 00:06:03.081 SYMLINK libspdk_event_vhost_scsi.so 00:06:03.339 SYMLINK libspdk_event_iscsi.so 00:06:03.339 SO libspdk.so.6.0 00:06:03.339 SYMLINK libspdk.so 00:06:03.602 CXX app/trace/trace.o 00:06:03.602 CC app/trace_record/trace_record.o 00:06:03.602 CC app/spdk_top/spdk_top.o 00:06:03.602 CC app/spdk_nvme_discover/discovery_aer.o 00:06:03.602 CC test/rpc_client/rpc_client_test.o 00:06:03.602 CC app/spdk_lspci/spdk_lspci.o 00:06:03.602 CC app/spdk_nvme_identify/identify.o 00:06:03.602 CC app/spdk_nvme_perf/perf.o 00:06:03.602 TEST_HEADER include/spdk/accel.h 00:06:03.602 TEST_HEADER 
include/spdk/accel_module.h 00:06:03.602 TEST_HEADER include/spdk/assert.h 00:06:03.602 TEST_HEADER include/spdk/barrier.h 00:06:03.602 TEST_HEADER include/spdk/base64.h 00:06:03.602 TEST_HEADER include/spdk/bdev.h 00:06:03.602 TEST_HEADER include/spdk/bdev_module.h 00:06:03.602 TEST_HEADER include/spdk/bdev_zone.h 00:06:03.602 TEST_HEADER include/spdk/bit_pool.h 00:06:03.602 TEST_HEADER include/spdk/bit_array.h 00:06:03.602 TEST_HEADER include/spdk/blob_bdev.h 00:06:03.602 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:03.602 TEST_HEADER include/spdk/blobfs.h 00:06:03.602 TEST_HEADER include/spdk/blob.h 00:06:03.602 TEST_HEADER include/spdk/conf.h 00:06:03.602 TEST_HEADER include/spdk/config.h 00:06:03.602 TEST_HEADER include/spdk/cpuset.h 00:06:03.602 TEST_HEADER include/spdk/crc16.h 00:06:03.602 TEST_HEADER include/spdk/crc32.h 00:06:03.602 TEST_HEADER include/spdk/crc64.h 00:06:03.602 TEST_HEADER include/spdk/dma.h 00:06:03.602 TEST_HEADER include/spdk/dif.h 00:06:03.602 TEST_HEADER include/spdk/endian.h 00:06:03.602 TEST_HEADER include/spdk/env_dpdk.h 00:06:03.602 TEST_HEADER include/spdk/env.h 00:06:03.602 TEST_HEADER include/spdk/event.h 00:06:03.602 TEST_HEADER include/spdk/fd.h 00:06:03.602 TEST_HEADER include/spdk/fd_group.h 00:06:03.602 TEST_HEADER include/spdk/file.h 00:06:03.603 TEST_HEADER include/spdk/fsdev.h 00:06:03.603 TEST_HEADER include/spdk/ftl.h 00:06:03.603 TEST_HEADER include/spdk/fsdev_module.h 00:06:03.603 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:03.603 TEST_HEADER include/spdk/gpt_spec.h 00:06:03.603 TEST_HEADER include/spdk/hexlify.h 00:06:03.603 TEST_HEADER include/spdk/idxd.h 00:06:03.603 TEST_HEADER include/spdk/histogram_data.h 00:06:03.603 TEST_HEADER include/spdk/idxd_spec.h 00:06:03.603 TEST_HEADER include/spdk/ioat.h 00:06:03.603 TEST_HEADER include/spdk/init.h 00:06:03.603 TEST_HEADER include/spdk/ioat_spec.h 00:06:03.603 TEST_HEADER include/spdk/iscsi_spec.h 00:06:03.603 TEST_HEADER include/spdk/json.h 00:06:03.603 
TEST_HEADER include/spdk/jsonrpc.h 00:06:03.603 TEST_HEADER include/spdk/keyring.h 00:06:03.603 TEST_HEADER include/spdk/keyring_module.h 00:06:03.603 TEST_HEADER include/spdk/likely.h 00:06:03.603 TEST_HEADER include/spdk/log.h 00:06:03.603 TEST_HEADER include/spdk/lvol.h 00:06:03.603 TEST_HEADER include/spdk/md5.h 00:06:03.603 TEST_HEADER include/spdk/memory.h 00:06:03.603 TEST_HEADER include/spdk/mmio.h 00:06:03.603 TEST_HEADER include/spdk/nbd.h 00:06:03.603 TEST_HEADER include/spdk/net.h 00:06:03.603 TEST_HEADER include/spdk/notify.h 00:06:03.603 TEST_HEADER include/spdk/nvme.h 00:06:03.603 TEST_HEADER include/spdk/nvme_intel.h 00:06:03.603 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:03.603 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:03.603 TEST_HEADER include/spdk/nvme_spec.h 00:06:03.603 TEST_HEADER include/spdk/nvme_zns.h 00:06:03.603 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:03.603 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:03.603 TEST_HEADER include/spdk/nvmf.h 00:06:03.603 TEST_HEADER include/spdk/nvmf_spec.h 00:06:03.603 TEST_HEADER include/spdk/nvmf_transport.h 00:06:03.603 TEST_HEADER include/spdk/opal.h 00:06:03.603 TEST_HEADER include/spdk/pci_ids.h 00:06:03.603 TEST_HEADER include/spdk/opal_spec.h 00:06:03.603 TEST_HEADER include/spdk/pipe.h 00:06:03.603 TEST_HEADER include/spdk/reduce.h 00:06:03.603 TEST_HEADER include/spdk/queue.h 00:06:03.603 TEST_HEADER include/spdk/rpc.h 00:06:03.603 TEST_HEADER include/spdk/scheduler.h 00:06:03.603 TEST_HEADER include/spdk/scsi.h 00:06:03.603 TEST_HEADER include/spdk/scsi_spec.h 00:06:03.603 TEST_HEADER include/spdk/sock.h 00:06:03.603 TEST_HEADER include/spdk/stdinc.h 00:06:03.603 TEST_HEADER include/spdk/string.h 00:06:03.603 TEST_HEADER include/spdk/thread.h 00:06:03.603 TEST_HEADER include/spdk/trace.h 00:06:03.603 TEST_HEADER include/spdk/trace_parser.h 00:06:03.603 TEST_HEADER include/spdk/ublk.h 00:06:03.603 TEST_HEADER include/spdk/tree.h 00:06:03.603 TEST_HEADER include/spdk/util.h 
00:06:03.603 TEST_HEADER include/spdk/uuid.h 00:06:03.603 TEST_HEADER include/spdk/version.h 00:06:03.603 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:03.603 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:03.603 TEST_HEADER include/spdk/vhost.h 00:06:03.603 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:03.603 TEST_HEADER include/spdk/vmd.h 00:06:03.603 TEST_HEADER include/spdk/xor.h 00:06:03.603 TEST_HEADER include/spdk/zipf.h 00:06:03.603 CXX test/cpp_headers/accel.o 00:06:03.603 CXX test/cpp_headers/accel_module.o 00:06:03.603 CC app/spdk_dd/spdk_dd.o 00:06:03.603 CXX test/cpp_headers/assert.o 00:06:03.603 CXX test/cpp_headers/barrier.o 00:06:03.603 CXX test/cpp_headers/base64.o 00:06:03.603 CXX test/cpp_headers/bdev.o 00:06:03.603 CXX test/cpp_headers/bdev_module.o 00:06:03.603 CXX test/cpp_headers/bdev_zone.o 00:06:03.603 CXX test/cpp_headers/bit_array.o 00:06:03.603 CXX test/cpp_headers/bit_pool.o 00:06:03.603 CXX test/cpp_headers/blob_bdev.o 00:06:03.603 CXX test/cpp_headers/blobfs_bdev.o 00:06:03.603 CXX test/cpp_headers/blobfs.o 00:06:03.603 CC app/iscsi_tgt/iscsi_tgt.o 00:06:03.603 CXX test/cpp_headers/blob.o 00:06:03.603 CC app/nvmf_tgt/nvmf_main.o 00:06:03.603 CXX test/cpp_headers/conf.o 00:06:03.603 CXX test/cpp_headers/config.o 00:06:03.603 CXX test/cpp_headers/cpuset.o 00:06:03.603 CXX test/cpp_headers/crc16.o 00:06:03.603 CXX test/cpp_headers/crc32.o 00:06:03.603 CC app/spdk_tgt/spdk_tgt.o 00:06:03.603 CC examples/util/zipf/zipf.o 00:06:03.603 CC test/thread/poller_perf/poller_perf.o 00:06:03.603 CC examples/ioat/verify/verify.o 00:06:03.603 CC test/env/vtophys/vtophys.o 00:06:03.603 CC test/app/jsoncat/jsoncat.o 00:06:03.603 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:03.603 CC examples/ioat/perf/perf.o 00:06:03.603 CC test/env/pci/pci_ut.o 00:06:03.603 CC test/app/histogram_perf/histogram_perf.o 00:06:03.603 CC test/env/memory/memory_ut.o 00:06:03.603 CC app/fio/nvme/fio_plugin.o 00:06:03.603 CC test/app/stub/stub.o 00:06:03.864 
CC test/dma/test_dma/test_dma.o 00:06:03.864 CC app/fio/bdev/fio_plugin.o 00:06:03.864 CC test/app/bdev_svc/bdev_svc.o 00:06:03.864 LINK spdk_lspci 00:06:03.864 CC test/env/mem_callbacks/mem_callbacks.o 00:06:03.864 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:03.864 LINK rpc_client_test 00:06:03.864 LINK spdk_nvme_discover 00:06:04.128 LINK poller_perf 00:06:04.128 LINK zipf 00:06:04.128 LINK jsoncat 00:06:04.128 LINK nvmf_tgt 00:06:04.128 LINK interrupt_tgt 00:06:04.128 LINK vtophys 00:06:04.128 LINK histogram_perf 00:06:04.128 CXX test/cpp_headers/crc64.o 00:06:04.128 CXX test/cpp_headers/dif.o 00:06:04.128 LINK env_dpdk_post_init 00:06:04.128 CXX test/cpp_headers/dma.o 00:06:04.128 CXX test/cpp_headers/env_dpdk.o 00:06:04.128 CXX test/cpp_headers/endian.o 00:06:04.128 LINK spdk_trace_record 00:06:04.128 CXX test/cpp_headers/env.o 00:06:04.128 CXX test/cpp_headers/event.o 00:06:04.128 CXX test/cpp_headers/fd_group.o 00:06:04.128 CXX test/cpp_headers/fd.o 00:06:04.128 CXX test/cpp_headers/file.o 00:06:04.128 LINK iscsi_tgt 00:06:04.128 CXX test/cpp_headers/fsdev.o 00:06:04.128 CXX test/cpp_headers/fsdev_module.o 00:06:04.128 LINK stub 00:06:04.128 CXX test/cpp_headers/ftl.o 00:06:04.128 CXX test/cpp_headers/fuse_dispatcher.o 00:06:04.128 LINK spdk_tgt 00:06:04.128 CXX test/cpp_headers/gpt_spec.o 00:06:04.128 CXX test/cpp_headers/hexlify.o 00:06:04.128 LINK verify 00:06:04.128 LINK ioat_perf 00:06:04.128 CXX test/cpp_headers/histogram_data.o 00:06:04.128 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:04.128 LINK bdev_svc 00:06:04.128 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:04.390 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:04.390 CXX test/cpp_headers/idxd.o 00:06:04.390 CXX test/cpp_headers/idxd_spec.o 00:06:04.390 CXX test/cpp_headers/init.o 00:06:04.390 LINK spdk_dd 00:06:04.390 CXX test/cpp_headers/ioat.o 00:06:04.390 CXX test/cpp_headers/ioat_spec.o 00:06:04.390 CXX test/cpp_headers/iscsi_spec.o 00:06:04.390 CXX test/cpp_headers/json.o 
00:06:04.390 CXX test/cpp_headers/jsonrpc.o 00:06:04.390 LINK spdk_trace 00:06:04.390 CXX test/cpp_headers/keyring.o 00:06:04.390 CXX test/cpp_headers/keyring_module.o 00:06:04.390 CXX test/cpp_headers/likely.o 00:06:04.390 CXX test/cpp_headers/log.o 00:06:04.390 CXX test/cpp_headers/lvol.o 00:06:04.390 CXX test/cpp_headers/md5.o 00:06:04.390 CXX test/cpp_headers/memory.o 00:06:04.655 CXX test/cpp_headers/mmio.o 00:06:04.655 CXX test/cpp_headers/nbd.o 00:06:04.655 LINK pci_ut 00:06:04.655 CXX test/cpp_headers/net.o 00:06:04.655 CXX test/cpp_headers/notify.o 00:06:04.655 CXX test/cpp_headers/nvme.o 00:06:04.655 CXX test/cpp_headers/nvme_intel.o 00:06:04.655 CXX test/cpp_headers/nvme_ocssd.o 00:06:04.655 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:04.655 CC examples/sock/hello_world/hello_sock.o 00:06:04.655 CXX test/cpp_headers/nvme_spec.o 00:06:04.655 CC test/event/reactor/reactor.o 00:06:04.655 CXX test/cpp_headers/nvme_zns.o 00:06:04.655 CXX test/cpp_headers/nvmf_cmd.o 00:06:04.655 CC test/event/event_perf/event_perf.o 00:06:04.655 CC test/event/app_repeat/app_repeat.o 00:06:04.655 CC test/event/reactor_perf/reactor_perf.o 00:06:04.655 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:04.655 CXX test/cpp_headers/nvmf.o 00:06:04.655 CC examples/thread/thread/thread_ex.o 00:06:04.655 LINK nvme_fuzz 00:06:04.655 CC examples/vmd/lsvmd/lsvmd.o 00:06:04.655 CC examples/vmd/led/led.o 00:06:04.655 LINK spdk_bdev 00:06:04.918 CC test/event/scheduler/scheduler.o 00:06:04.918 CC examples/idxd/perf/perf.o 00:06:04.918 CXX test/cpp_headers/nvmf_spec.o 00:06:04.918 LINK test_dma 00:06:04.918 CXX test/cpp_headers/nvmf_transport.o 00:06:04.918 LINK spdk_nvme 00:06:04.918 CXX test/cpp_headers/opal.o 00:06:04.918 CXX test/cpp_headers/opal_spec.o 00:06:04.918 CXX test/cpp_headers/pci_ids.o 00:06:04.918 CXX test/cpp_headers/pipe.o 00:06:04.918 CXX test/cpp_headers/queue.o 00:06:04.918 CXX test/cpp_headers/reduce.o 00:06:04.918 CXX test/cpp_headers/rpc.o 00:06:04.918 CXX 
test/cpp_headers/scheduler.o 00:06:04.918 CXX test/cpp_headers/scsi.o 00:06:04.918 CXX test/cpp_headers/scsi_spec.o 00:06:04.918 CXX test/cpp_headers/sock.o 00:06:04.918 CXX test/cpp_headers/stdinc.o 00:06:04.918 CXX test/cpp_headers/string.o 00:06:04.918 CXX test/cpp_headers/thread.o 00:06:04.918 CXX test/cpp_headers/trace.o 00:06:04.918 CXX test/cpp_headers/trace_parser.o 00:06:04.918 LINK reactor 00:06:04.918 CXX test/cpp_headers/tree.o 00:06:04.918 LINK event_perf 00:06:05.177 CXX test/cpp_headers/ublk.o 00:06:05.177 LINK lsvmd 00:06:05.177 LINK reactor_perf 00:06:05.177 CXX test/cpp_headers/util.o 00:06:05.177 CXX test/cpp_headers/uuid.o 00:06:05.177 CXX test/cpp_headers/version.o 00:06:05.177 LINK app_repeat 00:06:05.177 CXX test/cpp_headers/vfio_user_pci.o 00:06:05.177 CXX test/cpp_headers/vfio_user_spec.o 00:06:05.177 CXX test/cpp_headers/vhost.o 00:06:05.177 LINK led 00:06:05.177 LINK spdk_nvme_perf 00:06:05.177 CC app/vhost/vhost.o 00:06:05.177 CXX test/cpp_headers/vmd.o 00:06:05.177 CXX test/cpp_headers/xor.o 00:06:05.177 LINK mem_callbacks 00:06:05.177 LINK vhost_fuzz 00:06:05.177 CXX test/cpp_headers/zipf.o 00:06:05.177 LINK hello_sock 00:06:05.177 LINK spdk_nvme_identify 00:06:05.177 LINK thread 00:06:05.177 LINK scheduler 00:06:05.436 LINK spdk_top 00:06:05.436 LINK idxd_perf 00:06:05.436 CC test/nvme/sgl/sgl.o 00:06:05.436 CC test/nvme/e2edp/nvme_dp.o 00:06:05.436 CC test/nvme/aer/aer.o 00:06:05.436 CC test/nvme/reserve/reserve.o 00:06:05.436 CC test/nvme/startup/startup.o 00:06:05.436 CC test/nvme/fused_ordering/fused_ordering.o 00:06:05.436 CC test/nvme/err_injection/err_injection.o 00:06:05.436 CC test/nvme/reset/reset.o 00:06:05.436 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:05.436 CC test/nvme/boot_partition/boot_partition.o 00:06:05.436 CC test/nvme/overhead/overhead.o 00:06:05.436 CC test/nvme/compliance/nvme_compliance.o 00:06:05.436 CC test/nvme/simple_copy/simple_copy.o 00:06:05.436 CC test/nvme/connect_stress/connect_stress.o 
00:06:05.436 CC test/nvme/fdp/fdp.o 00:06:05.436 CC test/nvme/cuse/cuse.o 00:06:05.436 LINK vhost 00:06:05.436 CC test/blobfs/mkfs/mkfs.o 00:06:05.436 CC test/accel/dif/dif.o 00:06:05.694 CC test/lvol/esnap/esnap.o 00:06:05.694 CC examples/nvme/hotplug/hotplug.o 00:06:05.694 CC examples/nvme/reconnect/reconnect.o 00:06:05.694 CC examples/nvme/hello_world/hello_world.o 00:06:05.694 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:05.694 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:05.694 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:05.694 CC examples/nvme/abort/abort.o 00:06:05.694 CC examples/nvme/arbitration/arbitration.o 00:06:05.694 LINK connect_stress 00:06:05.694 LINK fused_ordering 00:06:05.694 LINK reserve 00:06:05.694 LINK err_injection 00:06:05.694 LINK doorbell_aers 00:06:05.694 CC examples/accel/perf/accel_perf.o 00:06:05.952 LINK sgl 00:06:05.952 LINK boot_partition 00:06:05.952 LINK startup 00:06:05.952 CC examples/blob/hello_world/hello_blob.o 00:06:05.952 LINK mkfs 00:06:05.952 LINK simple_copy 00:06:05.952 LINK nvme_dp 00:06:05.952 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:05.952 LINK aer 00:06:05.952 CC examples/blob/cli/blobcli.o 00:06:05.952 LINK cmb_copy 00:06:05.952 LINK pmr_persistence 00:06:05.952 LINK reset 00:06:05.952 LINK overhead 00:06:05.952 LINK arbitration 00:06:05.952 LINK memory_ut 00:06:06.210 LINK hello_world 00:06:06.210 LINK fdp 00:06:06.210 LINK nvme_compliance 00:06:06.210 LINK hotplug 00:06:06.210 LINK hello_blob 00:06:06.210 LINK reconnect 00:06:06.210 LINK nvme_manage 00:06:06.210 LINK hello_fsdev 00:06:06.210 LINK abort 00:06:06.468 LINK dif 00:06:06.468 LINK accel_perf 00:06:06.468 LINK blobcli 00:06:06.727 CC test/bdev/bdevio/bdevio.o 00:06:06.727 CC examples/bdev/hello_world/hello_bdev.o 00:06:06.727 LINK iscsi_fuzz 00:06:06.727 CC examples/bdev/bdevperf/bdevperf.o 00:06:06.986 LINK cuse 00:06:06.986 LINK hello_bdev 00:06:07.244 LINK bdevio 00:06:07.502 LINK bdevperf 00:06:08.069 CC 
examples/nvmf/nvmf/nvmf.o 00:06:08.327 LINK nvmf 00:06:10.857 LINK esnap 00:06:11.116 00:06:11.116 real 1m9.476s 00:06:11.116 user 11m52.894s 00:06:11.116 sys 2m37.633s 00:06:11.116 13:05:08 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:11.116 13:05:08 make -- common/autotest_common.sh@10 -- $ set +x 00:06:11.116 ************************************ 00:06:11.116 END TEST make 00:06:11.116 ************************************ 00:06:11.116 13:05:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:11.116 13:05:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:11.116 13:05:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:11.116 13:05:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.116 13:05:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:11.116 13:05:08 -- pm/common@44 -- $ pid=2974515 00:06:11.116 13:05:08 -- pm/common@50 -- $ kill -TERM 2974515 00:06:11.116 13:05:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.116 13:05:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:11.116 13:05:08 -- pm/common@44 -- $ pid=2974517 00:06:11.116 13:05:08 -- pm/common@50 -- $ kill -TERM 2974517 00:06:11.116 13:05:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.116 13:05:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:11.116 13:05:08 -- pm/common@44 -- $ pid=2974519 00:06:11.116 13:05:08 -- pm/common@50 -- $ kill -TERM 2974519 00:06:11.116 13:05:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.116 13:05:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:11.116 13:05:08 -- pm/common@44 -- $ pid=2974549 00:06:11.116 13:05:08 -- 
pm/common@50 -- $ sudo -E kill -TERM 2974549 00:06:11.116 13:05:08 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:11.116 13:05:08 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:11.376 13:05:08 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.376 13:05:08 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.376 13:05:08 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.376 13:05:08 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.376 13:05:08 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.376 13:05:08 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.376 13:05:08 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.376 13:05:08 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.376 13:05:08 -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.376 13:05:08 -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.376 13:05:08 -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.376 13:05:08 -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.376 13:05:08 -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.376 13:05:08 -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.376 13:05:08 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.376 13:05:08 -- scripts/common.sh@344 -- # case "$op" in 00:06:11.376 13:05:08 -- scripts/common.sh@345 -- # : 1 00:06:11.376 13:05:08 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.376 13:05:08 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.376 13:05:08 -- scripts/common.sh@365 -- # decimal 1 00:06:11.376 13:05:08 -- scripts/common.sh@353 -- # local d=1 00:06:11.376 13:05:08 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.376 13:05:08 -- scripts/common.sh@355 -- # echo 1 00:06:11.376 13:05:08 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.376 13:05:08 -- scripts/common.sh@366 -- # decimal 2 00:06:11.376 13:05:08 -- scripts/common.sh@353 -- # local d=2 00:06:11.376 13:05:08 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.376 13:05:08 -- scripts/common.sh@355 -- # echo 2 00:06:11.376 13:05:08 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.376 13:05:08 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.376 13:05:08 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.376 13:05:08 -- scripts/common.sh@368 -- # return 0 00:06:11.376 13:05:08 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.376 13:05:08 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.376 --rc genhtml_branch_coverage=1 00:06:11.376 --rc genhtml_function_coverage=1 00:06:11.376 --rc genhtml_legend=1 00:06:11.376 --rc geninfo_all_blocks=1 00:06:11.376 --rc geninfo_unexecuted_blocks=1 00:06:11.376 00:06:11.376 ' 00:06:11.376 13:05:08 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.376 --rc genhtml_branch_coverage=1 00:06:11.376 --rc genhtml_function_coverage=1 00:06:11.376 --rc genhtml_legend=1 00:06:11.376 --rc geninfo_all_blocks=1 00:06:11.376 --rc geninfo_unexecuted_blocks=1 00:06:11.376 00:06:11.376 ' 00:06:11.376 13:05:08 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.376 --rc genhtml_branch_coverage=1 00:06:11.376 --rc 
genhtml_function_coverage=1 00:06:11.376 --rc genhtml_legend=1 00:06:11.376 --rc geninfo_all_blocks=1 00:06:11.376 --rc geninfo_unexecuted_blocks=1 00:06:11.376 00:06:11.376 ' 00:06:11.376 13:05:08 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.376 --rc genhtml_branch_coverage=1 00:06:11.376 --rc genhtml_function_coverage=1 00:06:11.376 --rc genhtml_legend=1 00:06:11.376 --rc geninfo_all_blocks=1 00:06:11.376 --rc geninfo_unexecuted_blocks=1 00:06:11.376 00:06:11.376 ' 00:06:11.376 13:05:08 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.376 13:05:08 -- nvmf/common.sh@7 -- # uname -s 00:06:11.376 13:05:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.376 13:05:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.376 13:05:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.376 13:05:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.376 13:05:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.376 13:05:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.376 13:05:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.376 13:05:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.376 13:05:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.376 13:05:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.376 13:05:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:11.376 13:05:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:11.376 13:05:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.376 13:05:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.376 13:05:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.376 13:05:08 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.376 13:05:08 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.376 13:05:08 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:11.376 13:05:08 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.376 13:05:08 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.376 13:05:08 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.376 13:05:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.376 13:05:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.376 13:05:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.376 13:05:08 -- paths/export.sh@5 -- # export PATH 00:06:11.376 13:05:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.377 13:05:08 -- nvmf/common.sh@51 -- # : 0 00:06:11.377 13:05:08 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:11.377 13:05:08 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:06:11.377 13:05:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.377 13:05:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.377 13:05:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.377 13:05:08 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:11.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:11.377 13:05:08 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:11.377 13:05:08 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:11.377 13:05:08 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:11.377 13:05:08 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:11.377 13:05:08 -- spdk/autotest.sh@32 -- # uname -s 00:06:11.377 13:05:08 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:11.377 13:05:08 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:11.377 13:05:08 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:11.377 13:05:08 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:11.377 13:05:08 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:11.377 13:05:08 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:11.377 13:05:08 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:11.377 13:05:08 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:11.377 13:05:08 -- spdk/autotest.sh@48 -- # udevadm_pid=3033850 00:06:11.377 13:05:08 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:11.377 13:05:08 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:11.377 13:05:08 -- pm/common@17 -- # local monitor 00:06:11.377 13:05:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.377 13:05:08 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:06:11.377 13:05:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.377 13:05:08 -- pm/common@21 -- # date +%s 00:06:11.377 13:05:08 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:11.377 13:05:08 -- pm/common@21 -- # date +%s 00:06:11.377 13:05:08 -- pm/common@25 -- # sleep 1 00:06:11.377 13:05:08 -- pm/common@21 -- # date +%s 00:06:11.377 13:05:08 -- pm/common@21 -- # date +%s 00:06:11.377 13:05:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732536308 00:06:11.377 13:05:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732536308 00:06:11.377 13:05:08 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732536308 00:06:11.377 13:05:08 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732536308 00:06:11.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732536308_collect-cpu-load.pm.log 00:06:11.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732536308_collect-cpu-temp.pm.log 00:06:11.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732536308_collect-vmstat.pm.log 00:06:11.377 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732536308_collect-bmc-pm.bmc.pm.log 00:06:12.314 
13:05:09 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:12.314 13:05:09 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:12.314 13:05:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.314 13:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.314 13:05:09 -- spdk/autotest.sh@59 -- # create_test_list 00:06:12.314 13:05:09 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:12.314 13:05:09 -- common/autotest_common.sh@10 -- # set +x 00:06:12.314 13:05:09 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:12.314 13:05:09 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:12.314 13:05:09 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:12.314 13:05:09 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:12.314 13:05:09 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:12.314 13:05:09 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:12.314 13:05:09 -- common/autotest_common.sh@1457 -- # uname 00:06:12.314 13:05:09 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:12.314 13:05:09 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:12.314 13:05:09 -- common/autotest_common.sh@1477 -- # uname 00:06:12.314 13:05:09 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:12.314 13:05:09 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:12.314 13:05:09 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:12.572 lcov: LCOV version 1.15 00:06:12.572 13:05:10 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:30.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:30.648 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:52.568 13:05:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:52.568 13:05:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:52.568 13:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.568 13:05:47 -- spdk/autotest.sh@78 -- # rm -f 00:06:52.568 13:05:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:52.568 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:06:52.568 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:06:52.568 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:06:52.568 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:06:52.568 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:06:52.568 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:06:52.568 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:06:52.568 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:06:52.568 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:06:52.568 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:06:52.568 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:06:52.568 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:06:52.568 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:06:52.568 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:06:52.568 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:06:52.568 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:06:52.568 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:06:52.568 13:05:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:52.568 13:05:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:52.568 13:05:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:52.568 13:05:49 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:52.568 13:05:49 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.568 13:05:49 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:52.568 13:05:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:52.568 13:05:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:52.568 13:05:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.568 13:05:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:52.568 13:05:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:52.568 13:05:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:52.568 13:05:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:52.568 13:05:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:52.568 13:05:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:52.568 No valid GPT data, bailing 00:06:52.568 13:05:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:52.568 13:05:49 -- scripts/common.sh@394 -- # pt= 00:06:52.568 13:05:49 -- scripts/common.sh@395 -- # return 1 00:06:52.568 13:05:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:52.568 1+0 records in 00:06:52.568 1+0 records out 00:06:52.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0019121 s, 548 MB/s 00:06:52.568 13:05:49 -- spdk/autotest.sh@105 -- # sync 00:06:52.568 13:05:49 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:52.568 13:05:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:52.568 13:05:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:54.028 13:05:51 -- spdk/autotest.sh@111 -- # uname -s 00:06:54.028 13:05:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:54.028 13:05:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:54.028 13:05:51 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:54.963 Hugepages 00:06:54.963 node hugesize free / total 00:06:54.963 node0 1048576kB 0 / 0 00:06:54.963 node0 2048kB 0 / 0 00:06:54.963 node1 1048576kB 0 / 0 00:06:54.963 node1 2048kB 0 / 0 00:06:54.963 00:06:54.963 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:54.963 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:54.963 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:54.963 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:54.963 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:54.963 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:54.963 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:54.963 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:54.963 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:54.963 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:54.963 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:54.963 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:54.963 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:55.221 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:55.221 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:55.221 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:55.221 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:55.221 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:55.221 13:05:52 -- spdk/autotest.sh@117 -- # uname -s 00:06:55.221 13:05:52 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:55.221 13:05:52 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:06:55.221 13:05:52 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:56.600 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:56.600 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:56.600 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:56.600 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:56.600 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:56.600 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:56.600 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:56.600 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:56.600 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:57.538 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:57.538 13:05:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:58.475 13:05:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:58.475 13:05:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:58.475 13:05:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:58.475 13:05:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:58.475 13:05:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:58.475 13:05:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:58.475 13:05:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:58.475 13:05:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:58.475 13:05:56 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:06:58.734 13:05:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:58.734 13:05:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:06:58.734 13:05:56 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:59.669 Waiting for block devices as requested 00:06:59.928 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:59.928 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:59.928 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:00.187 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:00.187 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:00.187 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:00.187 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:00.447 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:00.447 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:07:00.708 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:00.708 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:00.708 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:00.708 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:00.968 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:00.968 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:00.968 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:01.226 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:01.227 13:05:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:01.227 13:05:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:07:01.227 13:05:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:07:01.227 13:05:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:01.227 13:05:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:01.227 13:05:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:01.227 13:05:58 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:07:01.227 13:05:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:01.227 13:05:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:01.227 13:05:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:01.227 13:05:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:01.227 13:05:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:01.227 13:05:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:01.227 13:05:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:01.227 13:05:58 -- common/autotest_common.sh@1543 -- # continue 00:07:01.227 13:05:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:01.227 13:05:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.227 13:05:58 -- common/autotest_common.sh@10 -- # set +x 00:07:01.227 13:05:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:01.227 13:05:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.227 13:05:58 -- common/autotest_common.sh@10 -- # set +x 00:07:01.227 13:05:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:02.604 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:02.604 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:07:02.604 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:02.604 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:02.604 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:02.604 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:02.604 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:02.604 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:02.604 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:03.538 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:07:03.795 13:06:01 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:03.795 13:06:01 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.795 13:06:01 -- common/autotest_common.sh@10 -- # set +x 00:07:03.795 13:06:01 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:03.795 13:06:01 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:03.796 13:06:01 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:03.796 13:06:01 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:03.796 13:06:01 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:03.796 13:06:01 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:03.796 13:06:01 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:03.796 13:06:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:03.796 13:06:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:03.796 13:06:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:03.796 13:06:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:07:03.796 13:06:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:03.796 13:06:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:03.796 13:06:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:07:03.796 13:06:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:07:03.796 13:06:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:03.796 13:06:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:07:03.796 13:06:01 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:07:03.796 13:06:01 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:03.796 13:06:01 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:07:03.796 13:06:01 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:07:03.796 13:06:01 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:07:03.796 13:06:01 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:07:03.796 13:06:01 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3044433 00:07:03.796 13:06:01 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.796 13:06:01 -- common/autotest_common.sh@1585 -- # waitforlisten 3044433 00:07:03.796 13:06:01 -- common/autotest_common.sh@835 -- # '[' -z 3044433 ']' 00:07:03.796 13:06:01 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.796 13:06:01 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.796 13:06:01 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.796 13:06:01 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.796 13:06:01 -- common/autotest_common.sh@10 -- # set +x 00:07:03.796 [2024-11-25 13:06:01.437321] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:03.796 [2024-11-25 13:06:01.437416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044433 ] 00:07:04.055 [2024-11-25 13:06:01.505956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.055 [2024-11-25 13:06:01.567838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.313 13:06:01 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.313 13:06:01 -- common/autotest_common.sh@868 -- # return 0 00:07:04.313 13:06:01 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:07:04.313 13:06:01 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:07:04.313 13:06:01 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:07:07.598 nvme0n1 00:07:07.598 13:06:04 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:07.598 [2024-11-25 13:06:05.202645] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:07:07.598 [2024-11-25 13:06:05.202691] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:07:07.598 request: 00:07:07.598 { 00:07:07.598 "nvme_ctrlr_name": "nvme0", 00:07:07.598 "password": "test", 00:07:07.598 "method": "bdev_nvme_opal_revert", 00:07:07.598 "req_id": 1 00:07:07.598 } 00:07:07.598 Got JSON-RPC error response 00:07:07.598 response: 00:07:07.598 { 00:07:07.598 
"code": -32603, 00:07:07.598 "message": "Internal error" 00:07:07.598 } 00:07:07.598 13:06:05 -- common/autotest_common.sh@1591 -- # true 00:07:07.598 13:06:05 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:07:07.598 13:06:05 -- common/autotest_common.sh@1595 -- # killprocess 3044433 00:07:07.598 13:06:05 -- common/autotest_common.sh@954 -- # '[' -z 3044433 ']' 00:07:07.598 13:06:05 -- common/autotest_common.sh@958 -- # kill -0 3044433 00:07:07.598 13:06:05 -- common/autotest_common.sh@959 -- # uname 00:07:07.598 13:06:05 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.598 13:06:05 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044433 00:07:07.856 13:06:05 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.856 13:06:05 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.856 13:06:05 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044433' 00:07:07.856 killing process with pid 3044433 00:07:07.856 13:06:05 -- common/autotest_common.sh@973 -- # kill 3044433 00:07:07.856 13:06:05 -- common/autotest_common.sh@978 -- # wait 3044433 00:07:09.755 13:06:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:09.755 13:06:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:09.755 13:06:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:09.755 13:06:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:09.755 13:06:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:09.755 13:06:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.755 13:06:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.755 13:06:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:09.755 13:06:07 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:09.755 13:06:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.755 13:06:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.755 13:06:07 -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.755 ************************************ 00:07:09.755 START TEST env 00:07:09.755 ************************************ 00:07:09.755 13:06:07 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:09.755 * Looking for test storage... 00:07:09.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:09.755 13:06:07 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.755 13:06:07 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.755 13:06:07 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.755 13:06:07 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.755 13:06:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.755 13:06:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.755 13:06:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.755 13:06:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.755 13:06:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.755 13:06:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.755 13:06:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.755 13:06:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.755 13:06:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.755 13:06:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.755 13:06:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.755 13:06:07 env -- scripts/common.sh@344 -- # case "$op" in 00:07:09.755 13:06:07 env -- scripts/common.sh@345 -- # : 1 00:07:09.755 13:06:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.755 13:06:07 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.755 13:06:07 env -- scripts/common.sh@365 -- # decimal 1 00:07:09.755 13:06:07 env -- scripts/common.sh@353 -- # local d=1 00:07:09.755 13:06:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.756 13:06:07 env -- scripts/common.sh@355 -- # echo 1 00:07:09.756 13:06:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.756 13:06:07 env -- scripts/common.sh@366 -- # decimal 2 00:07:09.756 13:06:07 env -- scripts/common.sh@353 -- # local d=2 00:07:09.756 13:06:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.756 13:06:07 env -- scripts/common.sh@355 -- # echo 2 00:07:09.756 13:06:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.756 13:06:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.756 13:06:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.756 13:06:07 env -- scripts/common.sh@368 -- # return 0 00:07:09.756 13:06:07 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.756 13:06:07 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.756 --rc genhtml_branch_coverage=1 00:07:09.756 --rc genhtml_function_coverage=1 00:07:09.756 --rc genhtml_legend=1 00:07:09.756 --rc geninfo_all_blocks=1 00:07:09.756 --rc geninfo_unexecuted_blocks=1 00:07:09.756 00:07:09.756 ' 00:07:09.756 13:06:07 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.756 --rc genhtml_branch_coverage=1 00:07:09.756 --rc genhtml_function_coverage=1 00:07:09.756 --rc genhtml_legend=1 00:07:09.756 --rc geninfo_all_blocks=1 00:07:09.756 --rc geninfo_unexecuted_blocks=1 00:07:09.756 00:07:09.756 ' 00:07:09.756 13:06:07 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:09.756 --rc genhtml_branch_coverage=1 00:07:09.756 --rc genhtml_function_coverage=1 00:07:09.756 --rc genhtml_legend=1 00:07:09.756 --rc geninfo_all_blocks=1 00:07:09.756 --rc geninfo_unexecuted_blocks=1 00:07:09.756 00:07:09.756 ' 00:07:09.756 13:06:07 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.756 --rc genhtml_branch_coverage=1 00:07:09.756 --rc genhtml_function_coverage=1 00:07:09.756 --rc genhtml_legend=1 00:07:09.756 --rc geninfo_all_blocks=1 00:07:09.756 --rc geninfo_unexecuted_blocks=1 00:07:09.756 00:07:09.756 ' 00:07:09.756 13:06:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:09.756 13:06:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.756 13:06:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.756 13:06:07 env -- common/autotest_common.sh@10 -- # set +x 00:07:09.756 ************************************ 00:07:09.756 START TEST env_memory 00:07:09.756 ************************************ 00:07:09.756 13:06:07 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:09.756 00:07:09.756 00:07:09.756 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.756 http://cunit.sourceforge.net/ 00:07:09.756 00:07:09.756 00:07:09.756 Suite: memory 00:07:09.756 Test: alloc and free memory map ...[2024-11-25 13:06:07.267097] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:09.756 passed 00:07:09.756 Test: mem map translation ...[2024-11-25 13:06:07.289472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:09.756 [2024-11-25 
13:06:07.289495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:09.756 [2024-11-25 13:06:07.289542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:09.756 [2024-11-25 13:06:07.289555] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:09.756 passed 00:07:09.756 Test: mem map registration ...[2024-11-25 13:06:07.337448] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:09.756 [2024-11-25 13:06:07.337472] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:09.756 passed 00:07:09.756 Test: mem map adjacent registrations ...passed 00:07:09.756 00:07:09.756 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.756 suites 1 1 n/a 0 0 00:07:09.756 tests 4 4 4 0 0 00:07:09.756 asserts 152 152 152 0 n/a 00:07:09.756 00:07:09.756 Elapsed time = 0.159 seconds 00:07:09.756 00:07:09.756 real 0m0.168s 00:07:09.756 user 0m0.160s 00:07:09.756 sys 0m0.008s 00:07:09.756 13:06:07 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.756 13:06:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:09.756 ************************************ 00:07:09.756 END TEST env_memory 00:07:09.756 ************************************ 00:07:10.013 13:06:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:10.013 13:06:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:07:10.013 13:06:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.013 13:06:07 env -- common/autotest_common.sh@10 -- # set +x 00:07:10.013 ************************************ 00:07:10.013 START TEST env_vtophys 00:07:10.013 ************************************ 00:07:10.013 13:06:07 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:10.013 EAL: lib.eal log level changed from notice to debug 00:07:10.013 EAL: Detected lcore 0 as core 0 on socket 0 00:07:10.013 EAL: Detected lcore 1 as core 1 on socket 0 00:07:10.013 EAL: Detected lcore 2 as core 2 on socket 0 00:07:10.013 EAL: Detected lcore 3 as core 3 on socket 0 00:07:10.013 EAL: Detected lcore 4 as core 4 on socket 0 00:07:10.013 EAL: Detected lcore 5 as core 5 on socket 0 00:07:10.013 EAL: Detected lcore 6 as core 8 on socket 0 00:07:10.013 EAL: Detected lcore 7 as core 9 on socket 0 00:07:10.013 EAL: Detected lcore 8 as core 10 on socket 0 00:07:10.013 EAL: Detected lcore 9 as core 11 on socket 0 00:07:10.013 EAL: Detected lcore 10 as core 12 on socket 0 00:07:10.013 EAL: Detected lcore 11 as core 13 on socket 0 00:07:10.013 EAL: Detected lcore 12 as core 0 on socket 1 00:07:10.013 EAL: Detected lcore 13 as core 1 on socket 1 00:07:10.013 EAL: Detected lcore 14 as core 2 on socket 1 00:07:10.013 EAL: Detected lcore 15 as core 3 on socket 1 00:07:10.013 EAL: Detected lcore 16 as core 4 on socket 1 00:07:10.013 EAL: Detected lcore 17 as core 5 on socket 1 00:07:10.013 EAL: Detected lcore 18 as core 8 on socket 1 00:07:10.013 EAL: Detected lcore 19 as core 9 on socket 1 00:07:10.013 EAL: Detected lcore 20 as core 10 on socket 1 00:07:10.013 EAL: Detected lcore 21 as core 11 on socket 1 00:07:10.013 EAL: Detected lcore 22 as core 12 on socket 1 00:07:10.013 EAL: Detected lcore 23 as core 13 on socket 1 00:07:10.013 EAL: Detected lcore 24 as core 0 on socket 0 00:07:10.013 EAL: Detected lcore 25 as core 
1 on socket 0 00:07:10.013 EAL: Detected lcore 26 as core 2 on socket 0 00:07:10.013 EAL: Detected lcore 27 as core 3 on socket 0 00:07:10.013 EAL: Detected lcore 28 as core 4 on socket 0 00:07:10.013 EAL: Detected lcore 29 as core 5 on socket 0 00:07:10.013 EAL: Detected lcore 30 as core 8 on socket 0 00:07:10.013 EAL: Detected lcore 31 as core 9 on socket 0 00:07:10.013 EAL: Detected lcore 32 as core 10 on socket 0 00:07:10.013 EAL: Detected lcore 33 as core 11 on socket 0 00:07:10.013 EAL: Detected lcore 34 as core 12 on socket 0 00:07:10.013 EAL: Detected lcore 35 as core 13 on socket 0 00:07:10.013 EAL: Detected lcore 36 as core 0 on socket 1 00:07:10.013 EAL: Detected lcore 37 as core 1 on socket 1 00:07:10.013 EAL: Detected lcore 38 as core 2 on socket 1 00:07:10.013 EAL: Detected lcore 39 as core 3 on socket 1 00:07:10.013 EAL: Detected lcore 40 as core 4 on socket 1 00:07:10.013 EAL: Detected lcore 41 as core 5 on socket 1 00:07:10.013 EAL: Detected lcore 42 as core 8 on socket 1 00:07:10.013 EAL: Detected lcore 43 as core 9 on socket 1 00:07:10.013 EAL: Detected lcore 44 as core 10 on socket 1 00:07:10.013 EAL: Detected lcore 45 as core 11 on socket 1 00:07:10.013 EAL: Detected lcore 46 as core 12 on socket 1 00:07:10.013 EAL: Detected lcore 47 as core 13 on socket 1 00:07:10.013 EAL: Maximum logical cores by configuration: 128 00:07:10.013 EAL: Detected CPU lcores: 48 00:07:10.013 EAL: Detected NUMA nodes: 2 00:07:10.013 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:10.013 EAL: Detected shared linkage of DPDK 00:07:10.013 EAL: No shared files mode enabled, IPC will be disabled 00:07:10.013 EAL: Bus pci wants IOVA as 'DC' 00:07:10.013 EAL: Buses did not request a specific IOVA mode. 00:07:10.013 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:10.013 EAL: Selected IOVA mode 'VA' 00:07:10.013 EAL: Probing VFIO support... 
00:07:10.013 EAL: IOMMU type 1 (Type 1) is supported 00:07:10.013 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:10.013 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:10.013 EAL: VFIO support initialized 00:07:10.013 EAL: Ask a virtual area of 0x2e000 bytes 00:07:10.013 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:10.013 EAL: Setting up physically contiguous memory... 00:07:10.013 EAL: Setting maximum number of open files to 524288 00:07:10.013 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:10.013 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:10.013 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:10.013 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:10.013 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:10.013 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:10.013 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:10.013 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:10.013 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:10.013 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:10.013 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:10.013 EAL: Ask a virtual area of 0x61000 bytes 00:07:10.013 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:10.013 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:10.013 EAL: Ask a virtual area of 0x400000000 bytes 00:07:10.013 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:07:10.013 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:10.013 EAL: Hugepages will be freed exactly as allocated. 00:07:10.013 EAL: No shared files mode enabled, IPC is disabled 00:07:10.013 EAL: No shared files mode enabled, IPC is disabled 00:07:10.013 EAL: TSC frequency is ~2700000 KHz 00:07:10.013 EAL: Main lcore 0 is ready (tid=7f593ed8aa00;cpuset=[0]) 00:07:10.013 EAL: Trying to obtain current memory policy. 00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 0 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 2MB 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:10.014 EAL: Mem event callback 'spdk:(nil)' registered 00:07:10.014 00:07:10.014 00:07:10.014 CUnit - A unit testing framework for C - Version 2.1-3 00:07:10.014 http://cunit.sourceforge.net/ 00:07:10.014 00:07:10.014 00:07:10.014 Suite: components_suite 00:07:10.014 Test: vtophys_malloc_test ...passed 00:07:10.014 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 4 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 4MB 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was shrunk by 4MB 00:07:10.014 EAL: Trying to obtain current memory policy. 
00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 4 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 6MB 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was shrunk by 6MB 00:07:10.014 EAL: Trying to obtain current memory policy. 00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 4 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 10MB 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was shrunk by 10MB 00:07:10.014 EAL: Trying to obtain current memory policy. 00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 4 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 18MB 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was shrunk by 18MB 00:07:10.014 EAL: Trying to obtain current memory policy. 
00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 4 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 34MB 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was shrunk by 34MB 00:07:10.014 EAL: Trying to obtain current memory policy. 00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 4 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 66MB 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was shrunk by 66MB 00:07:10.014 EAL: Trying to obtain current memory policy. 00:07:10.014 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.014 EAL: Restoring previous memory policy: 4 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.014 EAL: request: mp_malloc_sync 00:07:10.014 EAL: No shared files mode enabled, IPC is disabled 00:07:10.014 EAL: Heap on socket 0 was expanded by 130MB 00:07:10.014 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.271 EAL: request: mp_malloc_sync 00:07:10.271 EAL: No shared files mode enabled, IPC is disabled 00:07:10.271 EAL: Heap on socket 0 was shrunk by 130MB 00:07:10.271 EAL: Trying to obtain current memory policy. 
00:07:10.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.271 EAL: Restoring previous memory policy: 4 00:07:10.272 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.272 EAL: request: mp_malloc_sync 00:07:10.272 EAL: No shared files mode enabled, IPC is disabled 00:07:10.272 EAL: Heap on socket 0 was expanded by 258MB 00:07:10.272 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.272 EAL: request: mp_malloc_sync 00:07:10.272 EAL: No shared files mode enabled, IPC is disabled 00:07:10.272 EAL: Heap on socket 0 was shrunk by 258MB 00:07:10.272 EAL: Trying to obtain current memory policy. 00:07:10.272 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:10.530 EAL: Restoring previous memory policy: 4 00:07:10.530 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.530 EAL: request: mp_malloc_sync 00:07:10.530 EAL: No shared files mode enabled, IPC is disabled 00:07:10.530 EAL: Heap on socket 0 was expanded by 514MB 00:07:10.530 EAL: Calling mem event callback 'spdk:(nil)' 00:07:10.788 EAL: request: mp_malloc_sync 00:07:10.788 EAL: No shared files mode enabled, IPC is disabled 00:07:10.788 EAL: Heap on socket 0 was shrunk by 514MB 00:07:10.788 EAL: Trying to obtain current memory policy. 
00:07:10.788 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:11.046 EAL: Restoring previous memory policy: 4 00:07:11.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.046 EAL: request: mp_malloc_sync 00:07:11.046 EAL: No shared files mode enabled, IPC is disabled 00:07:11.046 EAL: Heap on socket 0 was expanded by 1026MB 00:07:11.046 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.304 EAL: request: mp_malloc_sync 00:07:11.304 EAL: No shared files mode enabled, IPC is disabled 00:07:11.304 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:11.304 passed 00:07:11.304 00:07:11.304 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.304 suites 1 1 n/a 0 0 00:07:11.304 tests 2 2 2 0 0 00:07:11.304 asserts 497 497 497 0 n/a 00:07:11.304 00:07:11.304 Elapsed time = 1.317 seconds 00:07:11.304 EAL: Calling mem event callback 'spdk:(nil)' 00:07:11.304 EAL: request: mp_malloc_sync 00:07:11.304 EAL: No shared files mode enabled, IPC is disabled 00:07:11.304 EAL: Heap on socket 0 was shrunk by 2MB 00:07:11.304 EAL: No shared files mode enabled, IPC is disabled 00:07:11.304 EAL: No shared files mode enabled, IPC is disabled 00:07:11.304 EAL: No shared files mode enabled, IPC is disabled 00:07:11.304 00:07:11.304 real 0m1.457s 00:07:11.304 user 0m0.832s 00:07:11.304 sys 0m0.575s 00:07:11.304 13:06:08 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.304 13:06:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:11.304 ************************************ 00:07:11.304 END TEST env_vtophys 00:07:11.304 ************************************ 00:07:11.304 13:06:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:11.304 13:06:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.304 13:06:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.304 13:06:08 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.304 
************************************ 00:07:11.304 START TEST env_pci 00:07:11.304 ************************************ 00:07:11.304 13:06:08 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:11.563 00:07:11.563 00:07:11.563 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.563 http://cunit.sourceforge.net/ 00:07:11.563 00:07:11.563 00:07:11.563 Suite: pci 00:07:11.563 Test: pci_hook ...[2024-11-25 13:06:08.966860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3045864 has claimed it 00:07:11.563 EAL: Cannot find device (10000:00:01.0) 00:07:11.563 EAL: Failed to attach device on primary process 00:07:11.563 passed 00:07:11.563 00:07:11.563 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.563 suites 1 1 n/a 0 0 00:07:11.563 tests 1 1 1 0 0 00:07:11.563 asserts 25 25 25 0 n/a 00:07:11.563 00:07:11.563 Elapsed time = 0.021 seconds 00:07:11.563 00:07:11.563 real 0m0.035s 00:07:11.563 user 0m0.013s 00:07:11.563 sys 0m0.021s 00:07:11.563 13:06:08 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.563 13:06:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:11.563 ************************************ 00:07:11.563 END TEST env_pci 00:07:11.563 ************************************ 00:07:11.563 13:06:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:11.563 13:06:09 env -- env/env.sh@15 -- # uname 00:07:11.563 13:06:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:11.563 13:06:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:11.563 13:06:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:11.563 13:06:09 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:11.563 13:06:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.563 13:06:09 env -- common/autotest_common.sh@10 -- # set +x 00:07:11.563 ************************************ 00:07:11.563 START TEST env_dpdk_post_init 00:07:11.563 ************************************ 00:07:11.563 13:06:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:11.563 EAL: Detected CPU lcores: 48 00:07:11.563 EAL: Detected NUMA nodes: 2 00:07:11.563 EAL: Detected shared linkage of DPDK 00:07:11.563 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:11.563 EAL: Selected IOVA mode 'VA' 00:07:11.563 EAL: VFIO support initialized 00:07:11.563 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:11.563 EAL: Using IOMMU type 1 (Type 1) 00:07:11.563 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:07:11.563 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:07:11.563 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:07:11.563 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:07:11.563 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:07:11.563 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:07:11.821 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:07:11.821 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:07:12.411 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:07:12.411 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:07:12.411 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:07:12.411 EAL: Probe PCI driver: 
spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:07:12.411 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:07:12.411 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:07:12.411 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:07:12.669 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:07:12.669 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:07:15.949 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:07:15.949 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:07:15.949 Starting DPDK initialization... 00:07:15.949 Starting SPDK post initialization... 00:07:15.949 SPDK NVMe probe 00:07:15.949 Attaching to 0000:0b:00.0 00:07:15.949 Attached to 0000:0b:00.0 00:07:15.949 Cleaning up... 00:07:15.949 00:07:15.949 real 0m4.382s 00:07:15.949 user 0m3.025s 00:07:15.949 sys 0m0.420s 00:07:15.949 13:06:13 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.949 13:06:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:15.949 ************************************ 00:07:15.949 END TEST env_dpdk_post_init 00:07:15.949 ************************************ 00:07:15.949 13:06:13 env -- env/env.sh@26 -- # uname 00:07:15.949 13:06:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:15.949 13:06:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:15.949 13:06:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.949 13:06:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.949 13:06:13 env -- common/autotest_common.sh@10 -- # set +x 00:07:15.949 ************************************ 00:07:15.949 START TEST env_mem_callbacks 00:07:15.949 ************************************ 00:07:15.949 13:06:13 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:15.949 EAL: Detected CPU lcores: 48 00:07:15.949 EAL: Detected NUMA nodes: 2 00:07:15.949 EAL: Detected shared linkage of DPDK 00:07:15.949 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:15.949 EAL: Selected IOVA mode 'VA' 00:07:15.949 EAL: VFIO support initialized 00:07:15.949 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:15.949 00:07:15.949 00:07:15.949 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.949 http://cunit.sourceforge.net/ 00:07:15.949 00:07:15.949 00:07:15.949 Suite: memory 00:07:15.949 Test: test ... 00:07:15.949 register 0x200000200000 2097152 00:07:15.949 malloc 3145728 00:07:15.949 register 0x200000400000 4194304 00:07:15.949 buf 0x200000500000 len 3145728 PASSED 00:07:15.949 malloc 64 00:07:15.949 buf 0x2000004fff40 len 64 PASSED 00:07:15.949 malloc 4194304 00:07:15.949 register 0x200000800000 6291456 00:07:15.949 buf 0x200000a00000 len 4194304 PASSED 00:07:15.949 free 0x200000500000 3145728 00:07:15.949 free 0x2000004fff40 64 00:07:15.949 unregister 0x200000400000 4194304 PASSED 00:07:15.949 free 0x200000a00000 4194304 00:07:15.949 unregister 0x200000800000 6291456 PASSED 00:07:15.949 malloc 8388608 00:07:15.949 register 0x200000400000 10485760 00:07:15.949 buf 0x200000600000 len 8388608 PASSED 00:07:15.949 free 0x200000600000 8388608 00:07:15.949 unregister 0x200000400000 10485760 PASSED 00:07:15.949 passed 00:07:15.949 00:07:15.949 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.949 suites 1 1 n/a 0 0 00:07:15.949 tests 1 1 1 0 0 00:07:15.949 asserts 15 15 15 0 n/a 00:07:15.949 00:07:15.949 Elapsed time = 0.004 seconds 00:07:15.949 00:07:15.949 real 0m0.050s 00:07:15.949 user 0m0.011s 00:07:15.949 sys 0m0.039s 00:07:15.949 13:06:13 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.949 13:06:13 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:15.949 ************************************ 00:07:15.949 END TEST env_mem_callbacks 00:07:15.949 ************************************ 00:07:15.949 00:07:15.949 real 0m6.486s 00:07:15.949 user 0m4.235s 00:07:15.949 sys 0m1.285s 00:07:15.949 13:06:13 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.949 13:06:13 env -- common/autotest_common.sh@10 -- # set +x 00:07:15.950 ************************************ 00:07:15.950 END TEST env 00:07:15.950 ************************************ 00:07:15.950 13:06:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:15.950 13:06:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.950 13:06:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.950 13:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:15.950 ************************************ 00:07:15.950 START TEST rpc 00:07:15.950 ************************************ 00:07:15.950 13:06:13 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:16.207 * Looking for test storage... 
00:07:16.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:16.207 13:06:13 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.207 13:06:13 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.207 13:06:13 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.207 13:06:13 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.207 13:06:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.207 13:06:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.207 13:06:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.207 13:06:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.207 13:06:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.207 13:06:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.207 13:06:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.207 13:06:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:16.207 13:06:13 rpc -- scripts/common.sh@345 -- # : 1 00:07:16.207 13:06:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.207 13:06:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.207 13:06:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:16.207 13:06:13 rpc -- scripts/common.sh@353 -- # local d=1 00:07:16.207 13:06:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.207 13:06:13 rpc -- scripts/common.sh@355 -- # echo 1 00:07:16.207 13:06:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.207 13:06:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@353 -- # local d=2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.207 13:06:13 rpc -- scripts/common.sh@355 -- # echo 2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.207 13:06:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.207 13:06:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.207 13:06:13 rpc -- scripts/common.sh@368 -- # return 0 00:07:16.207 13:06:13 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.207 13:06:13 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.207 --rc genhtml_branch_coverage=1 00:07:16.207 --rc genhtml_function_coverage=1 00:07:16.207 --rc genhtml_legend=1 00:07:16.208 --rc geninfo_all_blocks=1 00:07:16.208 --rc geninfo_unexecuted_blocks=1 00:07:16.208 00:07:16.208 ' 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.208 --rc genhtml_branch_coverage=1 00:07:16.208 --rc genhtml_function_coverage=1 00:07:16.208 --rc genhtml_legend=1 00:07:16.208 --rc geninfo_all_blocks=1 00:07:16.208 --rc geninfo_unexecuted_blocks=1 00:07:16.208 00:07:16.208 ' 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:16.208 --rc genhtml_branch_coverage=1 00:07:16.208 --rc genhtml_function_coverage=1 00:07:16.208 --rc genhtml_legend=1 00:07:16.208 --rc geninfo_all_blocks=1 00:07:16.208 --rc geninfo_unexecuted_blocks=1 00:07:16.208 00:07:16.208 ' 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.208 --rc genhtml_branch_coverage=1 00:07:16.208 --rc genhtml_function_coverage=1 00:07:16.208 --rc genhtml_legend=1 00:07:16.208 --rc geninfo_all_blocks=1 00:07:16.208 --rc geninfo_unexecuted_blocks=1 00:07:16.208 00:07:16.208 ' 00:07:16.208 13:06:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3046646 00:07:16.208 13:06:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:16.208 13:06:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.208 13:06:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3046646 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@835 -- # '[' -z 3046646 ']' 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.208 13:06:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.208 [2024-11-25 13:06:13.803490] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:07:16.208 [2024-11-25 13:06:13.803589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046646 ] 00:07:16.466 [2024-11-25 13:06:13.870238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.466 [2024-11-25 13:06:13.927360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:16.466 [2024-11-25 13:06:13.927414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3046646' to capture a snapshot of events at runtime. 00:07:16.466 [2024-11-25 13:06:13.927442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.466 [2024-11-25 13:06:13.927454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.466 [2024-11-25 13:06:13.927463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3046646 for offline analysis/debug. 
00:07:16.466 [2024-11-25 13:06:13.928076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.725 13:06:14 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.725 13:06:14 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:16.725 13:06:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:16.725 13:06:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:16.725 13:06:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:16.725 13:06:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:16.725 13:06:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.725 13:06:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.725 13:06:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.725 ************************************ 00:07:16.725 START TEST rpc_integrity 00:07:16.725 ************************************ 00:07:16.725 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:16.725 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:16.725 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.725 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.726 13:06:14 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:16.726 { 00:07:16.726 "name": "Malloc0", 00:07:16.726 "aliases": [ 00:07:16.726 "56e472a5-f306-4484-894e-072c3fb6070f" 00:07:16.726 ], 00:07:16.726 "product_name": "Malloc disk", 00:07:16.726 "block_size": 512, 00:07:16.726 "num_blocks": 16384, 00:07:16.726 "uuid": "56e472a5-f306-4484-894e-072c3fb6070f", 00:07:16.726 "assigned_rate_limits": { 00:07:16.726 "rw_ios_per_sec": 0, 00:07:16.726 "rw_mbytes_per_sec": 0, 00:07:16.726 "r_mbytes_per_sec": 0, 00:07:16.726 "w_mbytes_per_sec": 0 00:07:16.726 }, 00:07:16.726 "claimed": false, 00:07:16.726 "zoned": false, 00:07:16.726 "supported_io_types": { 00:07:16.726 "read": true, 00:07:16.726 "write": true, 00:07:16.726 "unmap": true, 00:07:16.726 "flush": true, 00:07:16.726 "reset": true, 00:07:16.726 "nvme_admin": false, 00:07:16.726 "nvme_io": false, 00:07:16.726 "nvme_io_md": false, 00:07:16.726 "write_zeroes": true, 00:07:16.726 "zcopy": true, 00:07:16.726 "get_zone_info": false, 00:07:16.726 
"zone_management": false, 00:07:16.726 "zone_append": false, 00:07:16.726 "compare": false, 00:07:16.726 "compare_and_write": false, 00:07:16.726 "abort": true, 00:07:16.726 "seek_hole": false, 00:07:16.726 "seek_data": false, 00:07:16.726 "copy": true, 00:07:16.726 "nvme_iov_md": false 00:07:16.726 }, 00:07:16.726 "memory_domains": [ 00:07:16.726 { 00:07:16.726 "dma_device_id": "system", 00:07:16.726 "dma_device_type": 1 00:07:16.726 }, 00:07:16.726 { 00:07:16.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.726 "dma_device_type": 2 00:07:16.726 } 00:07:16.726 ], 00:07:16.726 "driver_specific": {} 00:07:16.726 } 00:07:16.726 ]' 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.726 [2024-11-25 13:06:14.310917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:16.726 [2024-11-25 13:06:14.310966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.726 [2024-11-25 13:06:14.310989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe3a490 00:07:16.726 [2024-11-25 13:06:14.311002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.726 [2024-11-25 13:06:14.312375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.726 [2024-11-25 13:06:14.312402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:16.726 Passthru0 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.726 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.726 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:16.726 { 00:07:16.726 "name": "Malloc0", 00:07:16.726 "aliases": [ 00:07:16.726 "56e472a5-f306-4484-894e-072c3fb6070f" 00:07:16.726 ], 00:07:16.726 "product_name": "Malloc disk", 00:07:16.726 "block_size": 512, 00:07:16.726 "num_blocks": 16384, 00:07:16.726 "uuid": "56e472a5-f306-4484-894e-072c3fb6070f", 00:07:16.726 "assigned_rate_limits": { 00:07:16.726 "rw_ios_per_sec": 0, 00:07:16.726 "rw_mbytes_per_sec": 0, 00:07:16.726 "r_mbytes_per_sec": 0, 00:07:16.726 "w_mbytes_per_sec": 0 00:07:16.726 }, 00:07:16.726 "claimed": true, 00:07:16.726 "claim_type": "exclusive_write", 00:07:16.726 "zoned": false, 00:07:16.726 "supported_io_types": { 00:07:16.726 "read": true, 00:07:16.726 "write": true, 00:07:16.726 "unmap": true, 00:07:16.726 "flush": true, 00:07:16.726 "reset": true, 00:07:16.726 "nvme_admin": false, 00:07:16.726 "nvme_io": false, 00:07:16.726 "nvme_io_md": false, 00:07:16.726 "write_zeroes": true, 00:07:16.726 "zcopy": true, 00:07:16.726 "get_zone_info": false, 00:07:16.726 "zone_management": false, 00:07:16.726 "zone_append": false, 00:07:16.726 "compare": false, 00:07:16.726 "compare_and_write": false, 00:07:16.726 "abort": true, 00:07:16.726 "seek_hole": false, 00:07:16.726 "seek_data": false, 00:07:16.726 "copy": true, 00:07:16.726 "nvme_iov_md": false 00:07:16.726 }, 00:07:16.726 "memory_domains": [ 00:07:16.726 { 00:07:16.726 "dma_device_id": "system", 00:07:16.726 "dma_device_type": 1 00:07:16.726 }, 00:07:16.726 { 00:07:16.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.726 "dma_device_type": 2 00:07:16.726 } 00:07:16.726 ], 00:07:16.726 "driver_specific": {} 00:07:16.726 }, 00:07:16.726 { 
00:07:16.726 "name": "Passthru0", 00:07:16.726 "aliases": [ 00:07:16.726 "0c577bf5-8173-59dc-a0d7-6cc7618b8b1d" 00:07:16.726 ], 00:07:16.726 "product_name": "passthru", 00:07:16.726 "block_size": 512, 00:07:16.726 "num_blocks": 16384, 00:07:16.727 "uuid": "0c577bf5-8173-59dc-a0d7-6cc7618b8b1d", 00:07:16.727 "assigned_rate_limits": { 00:07:16.727 "rw_ios_per_sec": 0, 00:07:16.727 "rw_mbytes_per_sec": 0, 00:07:16.727 "r_mbytes_per_sec": 0, 00:07:16.727 "w_mbytes_per_sec": 0 00:07:16.727 }, 00:07:16.727 "claimed": false, 00:07:16.727 "zoned": false, 00:07:16.727 "supported_io_types": { 00:07:16.727 "read": true, 00:07:16.727 "write": true, 00:07:16.727 "unmap": true, 00:07:16.727 "flush": true, 00:07:16.727 "reset": true, 00:07:16.727 "nvme_admin": false, 00:07:16.727 "nvme_io": false, 00:07:16.727 "nvme_io_md": false, 00:07:16.727 "write_zeroes": true, 00:07:16.727 "zcopy": true, 00:07:16.727 "get_zone_info": false, 00:07:16.727 "zone_management": false, 00:07:16.727 "zone_append": false, 00:07:16.727 "compare": false, 00:07:16.727 "compare_and_write": false, 00:07:16.727 "abort": true, 00:07:16.727 "seek_hole": false, 00:07:16.727 "seek_data": false, 00:07:16.727 "copy": true, 00:07:16.727 "nvme_iov_md": false 00:07:16.727 }, 00:07:16.727 "memory_domains": [ 00:07:16.727 { 00:07:16.727 "dma_device_id": "system", 00:07:16.727 "dma_device_type": 1 00:07:16.727 }, 00:07:16.727 { 00:07:16.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.727 "dma_device_type": 2 00:07:16.727 } 00:07:16.727 ], 00:07:16.727 "driver_specific": { 00:07:16.727 "passthru": { 00:07:16.727 "name": "Passthru0", 00:07:16.727 "base_bdev_name": "Malloc0" 00:07:16.727 } 00:07:16.727 } 00:07:16.727 } 00:07:16.727 ]' 00:07:16.727 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:16.727 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:16.727 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:16.727 13:06:14 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.727 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.727 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.727 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:16.727 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.727 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:16.727 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.727 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:16.727 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.727 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.003 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.003 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:17.003 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:17.003 13:06:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:17.003 00:07:17.003 real 0m0.214s 00:07:17.003 user 0m0.142s 00:07:17.003 sys 0m0.018s 00:07:17.003 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.003 13:06:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.003 ************************************ 00:07:17.003 END TEST rpc_integrity 00:07:17.003 ************************************ 00:07:17.003 13:06:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:17.003 13:06:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.003 13:06:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.003 13:06:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.003 ************************************ 00:07:17.003 START TEST rpc_plugins 
00:07:17.003 ************************************ 00:07:17.003 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:17.003 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:17.003 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.003 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:17.003 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.003 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:17.003 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:17.003 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.003 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:17.003 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.003 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:17.003 { 00:07:17.003 "name": "Malloc1", 00:07:17.003 "aliases": [ 00:07:17.003 "670bf2fa-4ba9-48ee-8b00-9a15295e72ec" 00:07:17.003 ], 00:07:17.004 "product_name": "Malloc disk", 00:07:17.004 "block_size": 4096, 00:07:17.004 "num_blocks": 256, 00:07:17.004 "uuid": "670bf2fa-4ba9-48ee-8b00-9a15295e72ec", 00:07:17.004 "assigned_rate_limits": { 00:07:17.004 "rw_ios_per_sec": 0, 00:07:17.004 "rw_mbytes_per_sec": 0, 00:07:17.004 "r_mbytes_per_sec": 0, 00:07:17.004 "w_mbytes_per_sec": 0 00:07:17.004 }, 00:07:17.004 "claimed": false, 00:07:17.004 "zoned": false, 00:07:17.004 "supported_io_types": { 00:07:17.004 "read": true, 00:07:17.004 "write": true, 00:07:17.004 "unmap": true, 00:07:17.004 "flush": true, 00:07:17.004 "reset": true, 00:07:17.004 "nvme_admin": false, 00:07:17.004 "nvme_io": false, 00:07:17.004 "nvme_io_md": false, 00:07:17.004 "write_zeroes": true, 00:07:17.004 "zcopy": true, 00:07:17.004 "get_zone_info": false, 00:07:17.004 "zone_management": false, 00:07:17.004 
"zone_append": false, 00:07:17.004 "compare": false, 00:07:17.004 "compare_and_write": false, 00:07:17.004 "abort": true, 00:07:17.004 "seek_hole": false, 00:07:17.004 "seek_data": false, 00:07:17.004 "copy": true, 00:07:17.004 "nvme_iov_md": false 00:07:17.004 }, 00:07:17.004 "memory_domains": [ 00:07:17.004 { 00:07:17.004 "dma_device_id": "system", 00:07:17.004 "dma_device_type": 1 00:07:17.004 }, 00:07:17.004 { 00:07:17.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.004 "dma_device_type": 2 00:07:17.004 } 00:07:17.004 ], 00:07:17.004 "driver_specific": {} 00:07:17.004 } 00:07:17.004 ]' 00:07:17.004 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:17.004 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:17.004 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.004 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.004 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:17.004 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:17.004 13:06:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:17.004 00:07:17.004 real 0m0.107s 00:07:17.004 user 0m0.066s 00:07:17.004 sys 0m0.010s 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.004 13:06:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 ************************************ 
00:07:17.004 END TEST rpc_plugins 00:07:17.004 ************************************ 00:07:17.004 13:06:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:17.004 13:06:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.004 13:06:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.004 13:06:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 ************************************ 00:07:17.004 START TEST rpc_trace_cmd_test 00:07:17.004 ************************************ 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:17.004 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3046646", 00:07:17.004 "tpoint_group_mask": "0x8", 00:07:17.004 "iscsi_conn": { 00:07:17.004 "mask": "0x2", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "scsi": { 00:07:17.004 "mask": "0x4", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "bdev": { 00:07:17.004 "mask": "0x8", 00:07:17.004 "tpoint_mask": "0xffffffffffffffff" 00:07:17.004 }, 00:07:17.004 "nvmf_rdma": { 00:07:17.004 "mask": "0x10", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "nvmf_tcp": { 00:07:17.004 "mask": "0x20", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "ftl": { 00:07:17.004 "mask": "0x40", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "blobfs": { 00:07:17.004 "mask": "0x80", 00:07:17.004 
"tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "dsa": { 00:07:17.004 "mask": "0x200", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "thread": { 00:07:17.004 "mask": "0x400", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "nvme_pcie": { 00:07:17.004 "mask": "0x800", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "iaa": { 00:07:17.004 "mask": "0x1000", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "nvme_tcp": { 00:07:17.004 "mask": "0x2000", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "bdev_nvme": { 00:07:17.004 "mask": "0x4000", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "sock": { 00:07:17.004 "mask": "0x8000", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "blob": { 00:07:17.004 "mask": "0x10000", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "bdev_raid": { 00:07:17.004 "mask": "0x20000", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 }, 00:07:17.004 "scheduler": { 00:07:17.004 "mask": "0x40000", 00:07:17.004 "tpoint_mask": "0x0" 00:07:17.004 } 00:07:17.004 }' 00:07:17.004 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:17.264 00:07:17.264 real 0m0.203s 00:07:17.264 user 0m0.181s 00:07:17.264 sys 0m0.012s 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.264 13:06:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.264 ************************************ 00:07:17.264 END TEST rpc_trace_cmd_test 00:07:17.264 ************************************ 00:07:17.264 13:06:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:17.264 13:06:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:17.264 13:06:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:17.264 13:06:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.264 13:06:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.264 13:06:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.264 ************************************ 00:07:17.264 START TEST rpc_daemon_integrity 00:07:17.264 ************************************ 00:07:17.264 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:17.264 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:17.264 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.264 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.264 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.264 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:17.264 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:17.522 { 00:07:17.522 "name": "Malloc2", 00:07:17.522 "aliases": [ 00:07:17.522 "104717f3-bf53-43d3-9dc0-8ece03824682" 00:07:17.522 ], 00:07:17.522 "product_name": "Malloc disk", 00:07:17.522 "block_size": 512, 00:07:17.522 "num_blocks": 16384, 00:07:17.522 "uuid": "104717f3-bf53-43d3-9dc0-8ece03824682", 00:07:17.522 "assigned_rate_limits": { 00:07:17.522 "rw_ios_per_sec": 0, 00:07:17.522 "rw_mbytes_per_sec": 0, 00:07:17.522 "r_mbytes_per_sec": 0, 00:07:17.522 "w_mbytes_per_sec": 0 00:07:17.522 }, 00:07:17.522 "claimed": false, 00:07:17.522 "zoned": false, 00:07:17.522 "supported_io_types": { 00:07:17.522 "read": true, 00:07:17.522 "write": true, 00:07:17.522 "unmap": true, 00:07:17.522 "flush": true, 00:07:17.522 "reset": true, 00:07:17.522 "nvme_admin": false, 00:07:17.522 "nvme_io": false, 00:07:17.522 "nvme_io_md": false, 00:07:17.522 "write_zeroes": true, 00:07:17.522 "zcopy": true, 00:07:17.522 "get_zone_info": false, 00:07:17.522 "zone_management": false, 00:07:17.522 "zone_append": false, 00:07:17.522 "compare": false, 00:07:17.522 "compare_and_write": false, 00:07:17.522 "abort": true, 00:07:17.522 "seek_hole": false, 00:07:17.522 "seek_data": false, 00:07:17.522 "copy": true, 00:07:17.522 "nvme_iov_md": false 00:07:17.522 }, 00:07:17.522 "memory_domains": [ 00:07:17.522 { 
00:07:17.522 "dma_device_id": "system", 00:07:17.522 "dma_device_type": 1 00:07:17.522 }, 00:07:17.522 { 00:07:17.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.522 "dma_device_type": 2 00:07:17.522 } 00:07:17.522 ], 00:07:17.522 "driver_specific": {} 00:07:17.522 } 00:07:17.522 ]' 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.522 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.522 [2024-11-25 13:06:14.981562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:17.522 [2024-11-25 13:06:14.981618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.522 [2024-11-25 13:06:14.981644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe3a710 00:07:17.523 [2024-11-25 13:06:14.981657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.523 [2024-11-25 13:06:14.982852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.523 [2024-11-25 13:06:14.982880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:17.523 Passthru0 00:07:17.523 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.523 13:06:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:17.523 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.523 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.523 13:06:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:17.523 { 00:07:17.523 "name": "Malloc2", 00:07:17.523 "aliases": [ 00:07:17.523 "104717f3-bf53-43d3-9dc0-8ece03824682" 00:07:17.523 ], 00:07:17.523 "product_name": "Malloc disk", 00:07:17.523 "block_size": 512, 00:07:17.523 "num_blocks": 16384, 00:07:17.523 "uuid": "104717f3-bf53-43d3-9dc0-8ece03824682", 00:07:17.523 "assigned_rate_limits": { 00:07:17.523 "rw_ios_per_sec": 0, 00:07:17.523 "rw_mbytes_per_sec": 0, 00:07:17.523 "r_mbytes_per_sec": 0, 00:07:17.523 "w_mbytes_per_sec": 0 00:07:17.523 }, 00:07:17.523 "claimed": true, 00:07:17.523 "claim_type": "exclusive_write", 00:07:17.523 "zoned": false, 00:07:17.523 "supported_io_types": { 00:07:17.523 "read": true, 00:07:17.523 "write": true, 00:07:17.523 "unmap": true, 00:07:17.523 "flush": true, 00:07:17.523 "reset": true, 00:07:17.523 "nvme_admin": false, 00:07:17.523 "nvme_io": false, 00:07:17.523 "nvme_io_md": false, 00:07:17.523 "write_zeroes": true, 00:07:17.523 "zcopy": true, 00:07:17.523 "get_zone_info": false, 00:07:17.523 "zone_management": false, 00:07:17.523 "zone_append": false, 00:07:17.523 "compare": false, 00:07:17.523 "compare_and_write": false, 00:07:17.523 "abort": true, 00:07:17.523 "seek_hole": false, 00:07:17.523 "seek_data": false, 00:07:17.523 "copy": true, 00:07:17.523 "nvme_iov_md": false 00:07:17.523 }, 00:07:17.523 "memory_domains": [ 00:07:17.523 { 00:07:17.523 "dma_device_id": "system", 00:07:17.523 "dma_device_type": 1 00:07:17.523 }, 00:07:17.523 { 00:07:17.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.523 "dma_device_type": 2 00:07:17.523 } 00:07:17.523 ], 00:07:17.523 "driver_specific": {} 00:07:17.523 }, 00:07:17.523 { 00:07:17.523 "name": "Passthru0", 00:07:17.523 "aliases": [ 00:07:17.523 "06cd4f85-fd77-54c7-883c-a3ad79817f25" 00:07:17.523 ], 00:07:17.523 "product_name": "passthru", 00:07:17.523 "block_size": 512, 00:07:17.523 "num_blocks": 16384, 00:07:17.523 "uuid": 
"06cd4f85-fd77-54c7-883c-a3ad79817f25", 00:07:17.523 "assigned_rate_limits": { 00:07:17.523 "rw_ios_per_sec": 0, 00:07:17.523 "rw_mbytes_per_sec": 0, 00:07:17.523 "r_mbytes_per_sec": 0, 00:07:17.523 "w_mbytes_per_sec": 0 00:07:17.523 }, 00:07:17.523 "claimed": false, 00:07:17.523 "zoned": false, 00:07:17.523 "supported_io_types": { 00:07:17.523 "read": true, 00:07:17.523 "write": true, 00:07:17.523 "unmap": true, 00:07:17.523 "flush": true, 00:07:17.523 "reset": true, 00:07:17.523 "nvme_admin": false, 00:07:17.523 "nvme_io": false, 00:07:17.523 "nvme_io_md": false, 00:07:17.523 "write_zeroes": true, 00:07:17.523 "zcopy": true, 00:07:17.523 "get_zone_info": false, 00:07:17.523 "zone_management": false, 00:07:17.523 "zone_append": false, 00:07:17.523 "compare": false, 00:07:17.523 "compare_and_write": false, 00:07:17.523 "abort": true, 00:07:17.523 "seek_hole": false, 00:07:17.523 "seek_data": false, 00:07:17.523 "copy": true, 00:07:17.523 "nvme_iov_md": false 00:07:17.523 }, 00:07:17.523 "memory_domains": [ 00:07:17.523 { 00:07:17.523 "dma_device_id": "system", 00:07:17.523 "dma_device_type": 1 00:07:17.523 }, 00:07:17.523 { 00:07:17.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.523 "dma_device_type": 2 00:07:17.523 } 00:07:17.523 ], 00:07:17.523 "driver_specific": { 00:07:17.523 "passthru": { 00:07:17.523 "name": "Passthru0", 00:07:17.523 "base_bdev_name": "Malloc2" 00:07:17.523 } 00:07:17.523 } 00:07:17.523 } 00:07:17.523 ]' 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:17.523 00:07:17.523 real 0m0.210s 00:07:17.523 user 0m0.133s 00:07:17.523 sys 0m0.021s 00:07:17.523 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.524 13:06:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:17.524 ************************************ 00:07:17.524 END TEST rpc_daemon_integrity 00:07:17.524 ************************************ 00:07:17.524 13:06:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:17.524 13:06:15 rpc -- rpc/rpc.sh@84 -- # killprocess 3046646 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@954 -- # '[' -z 3046646 ']' 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@958 -- # kill -0 3046646 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@959 -- # uname 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.524 13:06:15 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046646 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046646' 00:07:17.524 killing process with pid 3046646 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@973 -- # kill 3046646 00:07:17.524 13:06:15 rpc -- common/autotest_common.sh@978 -- # wait 3046646 00:07:18.089 00:07:18.089 real 0m1.962s 00:07:18.089 user 0m2.443s 00:07:18.089 sys 0m0.599s 00:07:18.089 13:06:15 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.089 13:06:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.089 ************************************ 00:07:18.089 END TEST rpc 00:07:18.089 ************************************ 00:07:18.089 13:06:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:18.089 13:06:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.089 13:06:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.089 13:06:15 -- common/autotest_common.sh@10 -- # set +x 00:07:18.089 ************************************ 00:07:18.089 START TEST skip_rpc 00:07:18.089 ************************************ 00:07:18.089 13:06:15 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:18.089 * Looking for test storage... 
00:07:18.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:18.089 13:06:15 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.089 13:06:15 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.089 13:06:15 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.347 13:06:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.347 --rc genhtml_branch_coverage=1 00:07:18.347 --rc genhtml_function_coverage=1 00:07:18.347 --rc genhtml_legend=1 00:07:18.347 --rc geninfo_all_blocks=1 00:07:18.347 --rc geninfo_unexecuted_blocks=1 00:07:18.347 00:07:18.347 ' 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.347 --rc genhtml_branch_coverage=1 00:07:18.347 --rc genhtml_function_coverage=1 00:07:18.347 --rc genhtml_legend=1 00:07:18.347 --rc geninfo_all_blocks=1 00:07:18.347 --rc geninfo_unexecuted_blocks=1 00:07:18.347 00:07:18.347 ' 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:18.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.347 --rc genhtml_branch_coverage=1 00:07:18.347 --rc genhtml_function_coverage=1 00:07:18.347 --rc genhtml_legend=1 00:07:18.347 --rc geninfo_all_blocks=1 00:07:18.347 --rc geninfo_unexecuted_blocks=1 00:07:18.347 00:07:18.347 ' 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.347 --rc genhtml_branch_coverage=1 00:07:18.347 --rc genhtml_function_coverage=1 00:07:18.347 --rc genhtml_legend=1 00:07:18.347 --rc geninfo_all_blocks=1 00:07:18.347 --rc geninfo_unexecuted_blocks=1 00:07:18.347 00:07:18.347 ' 00:07:18.347 13:06:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:18.347 13:06:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:18.347 13:06:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.347 13:06:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.347 ************************************ 00:07:18.347 START TEST skip_rpc 00:07:18.347 ************************************ 00:07:18.347 13:06:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:18.347 13:06:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3046982 00:07:18.347 13:06:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:18.347 13:06:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.347 13:06:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:07:18.347 [2024-11-25 13:06:15.849711] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:18.348 [2024-11-25 13:06:15.849790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046982 ] 00:07:18.348 [2024-11-25 13:06:15.915695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.348 [2024-11-25 13:06:15.974415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.609 13:06:20 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3046982 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3046982 ']' 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3046982 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046982 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046982' 00:07:23.609 killing process with pid 3046982 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3046982 00:07:23.609 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3046982 00:07:23.609 00:07:23.609 real 0m5.452s 00:07:23.609 user 0m5.147s 00:07:23.609 sys 0m0.317s 00:07:23.609 13:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.609 13:06:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.609 ************************************ 00:07:23.609 END TEST skip_rpc 00:07:23.609 ************************************ 00:07:23.868 13:06:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:23.868 13:06:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.868 13:06:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.868 13:06:21 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.868 ************************************ 00:07:23.868 START TEST skip_rpc_with_json 00:07:23.868 ************************************ 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3047675 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3047675 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3047675 ']' 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.868 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:23.868 [2024-11-25 13:06:21.349164] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:07:23.868 [2024-11-25 13:06:21.349255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3047675 ] 00:07:23.868 [2024-11-25 13:06:21.414203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.868 [2024-11-25 13:06:21.474250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.126 [2024-11-25 13:06:21.742492] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:24.126 request: 00:07:24.126 { 00:07:24.126 "trtype": "tcp", 00:07:24.126 "method": "nvmf_get_transports", 00:07:24.126 "req_id": 1 00:07:24.126 } 00:07:24.126 Got JSON-RPC error response 00:07:24.126 response: 00:07:24.126 { 00:07:24.126 "code": -19, 00:07:24.126 "message": "No such device" 00:07:24.126 } 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.126 [2024-11-25 13:06:21.750628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.126 13:06:21 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.126 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:24.384 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.384 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:24.384 { 00:07:24.384 "subsystems": [ 00:07:24.384 { 00:07:24.384 "subsystem": "fsdev", 00:07:24.384 "config": [ 00:07:24.384 { 00:07:24.384 "method": "fsdev_set_opts", 00:07:24.384 "params": { 00:07:24.384 "fsdev_io_pool_size": 65535, 00:07:24.384 "fsdev_io_cache_size": 256 00:07:24.384 } 00:07:24.384 } 00:07:24.384 ] 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "subsystem": "vfio_user_target", 00:07:24.384 "config": null 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "subsystem": "keyring", 00:07:24.384 "config": [] 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "subsystem": "iobuf", 00:07:24.384 "config": [ 00:07:24.384 { 00:07:24.384 "method": "iobuf_set_options", 00:07:24.384 "params": { 00:07:24.384 "small_pool_count": 8192, 00:07:24.384 "large_pool_count": 1024, 00:07:24.384 "small_bufsize": 8192, 00:07:24.384 "large_bufsize": 135168, 00:07:24.384 "enable_numa": false 00:07:24.384 } 00:07:24.384 } 00:07:24.384 ] 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "subsystem": "sock", 00:07:24.384 "config": [ 00:07:24.384 { 00:07:24.384 "method": "sock_set_default_impl", 00:07:24.384 "params": { 00:07:24.384 "impl_name": "posix" 00:07:24.384 } 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "method": "sock_impl_set_options", 00:07:24.384 "params": { 00:07:24.384 "impl_name": "ssl", 00:07:24.384 "recv_buf_size": 4096, 00:07:24.384 "send_buf_size": 4096, 
00:07:24.384 "enable_recv_pipe": true, 00:07:24.384 "enable_quickack": false, 00:07:24.384 "enable_placement_id": 0, 00:07:24.384 "enable_zerocopy_send_server": true, 00:07:24.384 "enable_zerocopy_send_client": false, 00:07:24.384 "zerocopy_threshold": 0, 00:07:24.384 "tls_version": 0, 00:07:24.384 "enable_ktls": false 00:07:24.384 } 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "method": "sock_impl_set_options", 00:07:24.384 "params": { 00:07:24.384 "impl_name": "posix", 00:07:24.384 "recv_buf_size": 2097152, 00:07:24.384 "send_buf_size": 2097152, 00:07:24.384 "enable_recv_pipe": true, 00:07:24.384 "enable_quickack": false, 00:07:24.384 "enable_placement_id": 0, 00:07:24.384 "enable_zerocopy_send_server": true, 00:07:24.384 "enable_zerocopy_send_client": false, 00:07:24.384 "zerocopy_threshold": 0, 00:07:24.384 "tls_version": 0, 00:07:24.384 "enable_ktls": false 00:07:24.384 } 00:07:24.384 } 00:07:24.384 ] 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "subsystem": "vmd", 00:07:24.384 "config": [] 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "subsystem": "accel", 00:07:24.384 "config": [ 00:07:24.384 { 00:07:24.384 "method": "accel_set_options", 00:07:24.384 "params": { 00:07:24.384 "small_cache_size": 128, 00:07:24.384 "large_cache_size": 16, 00:07:24.384 "task_count": 2048, 00:07:24.384 "sequence_count": 2048, 00:07:24.384 "buf_count": 2048 00:07:24.384 } 00:07:24.384 } 00:07:24.384 ] 00:07:24.384 }, 00:07:24.384 { 00:07:24.384 "subsystem": "bdev", 00:07:24.384 "config": [ 00:07:24.384 { 00:07:24.384 "method": "bdev_set_options", 00:07:24.384 "params": { 00:07:24.384 "bdev_io_pool_size": 65535, 00:07:24.384 "bdev_io_cache_size": 256, 00:07:24.384 "bdev_auto_examine": true, 00:07:24.384 "iobuf_small_cache_size": 128, 00:07:24.384 "iobuf_large_cache_size": 16 00:07:24.384 } 00:07:24.384 }, 00:07:24.385 { 00:07:24.385 "method": "bdev_raid_set_options", 00:07:24.385 "params": { 00:07:24.385 "process_window_size_kb": 1024, 00:07:24.385 "process_max_bandwidth_mb_sec": 0 
00:07:24.385 } 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "method": "bdev_iscsi_set_options", 00:07:24.385 "params": { 00:07:24.385 "timeout_sec": 30 00:07:24.385 } 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "method": "bdev_nvme_set_options", 00:07:24.385 "params": { 00:07:24.385 "action_on_timeout": "none", 00:07:24.385 "timeout_us": 0, 00:07:24.385 "timeout_admin_us": 0, 00:07:24.385 "keep_alive_timeout_ms": 10000, 00:07:24.385 "arbitration_burst": 0, 00:07:24.385 "low_priority_weight": 0, 00:07:24.385 "medium_priority_weight": 0, 00:07:24.385 "high_priority_weight": 0, 00:07:24.385 "nvme_adminq_poll_period_us": 10000, 00:07:24.385 "nvme_ioq_poll_period_us": 0, 00:07:24.385 "io_queue_requests": 0, 00:07:24.385 "delay_cmd_submit": true, 00:07:24.385 "transport_retry_count": 4, 00:07:24.385 "bdev_retry_count": 3, 00:07:24.385 "transport_ack_timeout": 0, 00:07:24.385 "ctrlr_loss_timeout_sec": 0, 00:07:24.385 "reconnect_delay_sec": 0, 00:07:24.385 "fast_io_fail_timeout_sec": 0, 00:07:24.385 "disable_auto_failback": false, 00:07:24.385 "generate_uuids": false, 00:07:24.385 "transport_tos": 0, 00:07:24.385 "nvme_error_stat": false, 00:07:24.385 "rdma_srq_size": 0, 00:07:24.385 "io_path_stat": false, 00:07:24.385 "allow_accel_sequence": false, 00:07:24.385 "rdma_max_cq_size": 0, 00:07:24.385 "rdma_cm_event_timeout_ms": 0, 00:07:24.385 "dhchap_digests": [ 00:07:24.385 "sha256", 00:07:24.385 "sha384", 00:07:24.385 "sha512" 00:07:24.385 ], 00:07:24.385 "dhchap_dhgroups": [ 00:07:24.385 "null", 00:07:24.385 "ffdhe2048", 00:07:24.385 "ffdhe3072", 00:07:24.385 "ffdhe4096", 00:07:24.385 "ffdhe6144", 00:07:24.385 "ffdhe8192" 00:07:24.385 ] 00:07:24.385 } 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "method": "bdev_nvme_set_hotplug", 00:07:24.385 "params": { 00:07:24.385 "period_us": 100000, 00:07:24.385 "enable": false 00:07:24.385 } 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "method": "bdev_wait_for_examine" 00:07:24.385 } 00:07:24.385 ] 00:07:24.385 }, 00:07:24.385 { 
00:07:24.385 "subsystem": "scsi", 00:07:24.385 "config": null 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "subsystem": "scheduler", 00:07:24.385 "config": [ 00:07:24.385 { 00:07:24.385 "method": "framework_set_scheduler", 00:07:24.385 "params": { 00:07:24.385 "name": "static" 00:07:24.385 } 00:07:24.385 } 00:07:24.385 ] 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "subsystem": "vhost_scsi", 00:07:24.385 "config": [] 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "subsystem": "vhost_blk", 00:07:24.385 "config": [] 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "subsystem": "ublk", 00:07:24.385 "config": [] 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "subsystem": "nbd", 00:07:24.385 "config": [] 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "subsystem": "nvmf", 00:07:24.385 "config": [ 00:07:24.385 { 00:07:24.385 "method": "nvmf_set_config", 00:07:24.385 "params": { 00:07:24.385 "discovery_filter": "match_any", 00:07:24.385 "admin_cmd_passthru": { 00:07:24.385 "identify_ctrlr": false 00:07:24.385 }, 00:07:24.385 "dhchap_digests": [ 00:07:24.385 "sha256", 00:07:24.385 "sha384", 00:07:24.385 "sha512" 00:07:24.385 ], 00:07:24.385 "dhchap_dhgroups": [ 00:07:24.385 "null", 00:07:24.385 "ffdhe2048", 00:07:24.385 "ffdhe3072", 00:07:24.385 "ffdhe4096", 00:07:24.385 "ffdhe6144", 00:07:24.385 "ffdhe8192" 00:07:24.385 ] 00:07:24.385 } 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "method": "nvmf_set_max_subsystems", 00:07:24.385 "params": { 00:07:24.385 "max_subsystems": 1024 00:07:24.385 } 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "method": "nvmf_set_crdt", 00:07:24.385 "params": { 00:07:24.385 "crdt1": 0, 00:07:24.385 "crdt2": 0, 00:07:24.385 "crdt3": 0 00:07:24.385 } 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "method": "nvmf_create_transport", 00:07:24.385 "params": { 00:07:24.385 "trtype": "TCP", 00:07:24.385 "max_queue_depth": 128, 00:07:24.385 "max_io_qpairs_per_ctrlr": 127, 00:07:24.385 "in_capsule_data_size": 4096, 00:07:24.385 "max_io_size": 131072, 00:07:24.385 
"io_unit_size": 131072, 00:07:24.385 "max_aq_depth": 128, 00:07:24.385 "num_shared_buffers": 511, 00:07:24.385 "buf_cache_size": 4294967295, 00:07:24.385 "dif_insert_or_strip": false, 00:07:24.385 "zcopy": false, 00:07:24.385 "c2h_success": true, 00:07:24.385 "sock_priority": 0, 00:07:24.385 "abort_timeout_sec": 1, 00:07:24.385 "ack_timeout": 0, 00:07:24.385 "data_wr_pool_size": 0 00:07:24.385 } 00:07:24.385 } 00:07:24.385 ] 00:07:24.385 }, 00:07:24.385 { 00:07:24.385 "subsystem": "iscsi", 00:07:24.385 "config": [ 00:07:24.385 { 00:07:24.385 "method": "iscsi_set_options", 00:07:24.385 "params": { 00:07:24.385 "node_base": "iqn.2016-06.io.spdk", 00:07:24.385 "max_sessions": 128, 00:07:24.385 "max_connections_per_session": 2, 00:07:24.385 "max_queue_depth": 64, 00:07:24.385 "default_time2wait": 2, 00:07:24.385 "default_time2retain": 20, 00:07:24.385 "first_burst_length": 8192, 00:07:24.385 "immediate_data": true, 00:07:24.385 "allow_duplicated_isid": false, 00:07:24.385 "error_recovery_level": 0, 00:07:24.385 "nop_timeout": 60, 00:07:24.385 "nop_in_interval": 30, 00:07:24.385 "disable_chap": false, 00:07:24.385 "require_chap": false, 00:07:24.385 "mutual_chap": false, 00:07:24.385 "chap_group": 0, 00:07:24.385 "max_large_datain_per_connection": 64, 00:07:24.385 "max_r2t_per_connection": 4, 00:07:24.385 "pdu_pool_size": 36864, 00:07:24.385 "immediate_data_pool_size": 16384, 00:07:24.385 "data_out_pool_size": 2048 00:07:24.385 } 00:07:24.385 } 00:07:24.385 ] 00:07:24.385 } 00:07:24.386 ] 00:07:24.386 } 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3047675 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3047675 ']' 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3047675 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047675 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047675' 00:07:24.386 killing process with pid 3047675 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3047675 00:07:24.386 13:06:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3047675 00:07:24.951 13:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3047815 00:07:24.951 13:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:24.951 13:06:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:30.212 13:06:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3047815 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3047815 ']' 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3047815 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047815 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047815' 00:07:30.213 killing process with pid 3047815 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3047815 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3047815 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:30.213 00:07:30.213 real 0m6.540s 00:07:30.213 user 0m6.204s 00:07:30.213 sys 0m0.657s 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.213 13:06:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:30.213 ************************************ 00:07:30.213 END TEST skip_rpc_with_json 00:07:30.213 ************************************ 00:07:30.213 13:06:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:30.213 13:06:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.213 13:06:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.213 13:06:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.471 ************************************ 00:07:30.471 START TEST skip_rpc_with_delay 00:07:30.471 ************************************ 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:30.471 [2024-11-25 13:06:27.940821] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.471 00:07:30.471 real 0m0.074s 00:07:30.471 user 0m0.054s 00:07:30.471 sys 0m0.020s 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.471 13:06:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:30.471 ************************************ 00:07:30.471 END TEST skip_rpc_with_delay 00:07:30.471 ************************************ 00:07:30.471 13:06:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:30.471 13:06:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:30.471 13:06:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:30.471 13:06:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.471 13:06:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.471 13:06:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.471 ************************************ 00:07:30.471 START TEST exit_on_failed_rpc_init 00:07:30.471 ************************************ 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3048528 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3048528 
00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3048528 ']' 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.471 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:30.471 [2024-11-25 13:06:28.066213] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:30.471 [2024-11-25 13:06:28.066318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048528 ] 00:07:30.729 [2024-11-25 13:06:28.130593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.729 [2024-11-25 13:06:28.189670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:30.987 
13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:30.987 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:30.987 [2024-11-25 13:06:28.522929] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:07:30.987 [2024-11-25 13:06:28.523008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048659 ] 00:07:30.987 [2024-11-25 13:06:28.589016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.245 [2024-11-25 13:06:28.649312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.245 [2024-11-25 13:06:28.649417] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:31.245 [2024-11-25 13:06:28.649437] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:31.245 [2024-11-25 13:06:28.649448] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3048528 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3048528 ']' 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3048528 00:07:31.245 13:06:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3048528 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3048528' 00:07:31.245 killing process with pid 3048528 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3048528 00:07:31.245 13:06:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3048528 00:07:31.810 00:07:31.810 real 0m1.178s 00:07:31.810 user 0m1.314s 00:07:31.810 sys 0m0.418s 00:07:31.810 13:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.810 13:06:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:31.810 ************************************ 00:07:31.810 END TEST exit_on_failed_rpc_init 00:07:31.810 ************************************ 00:07:31.810 13:06:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:31.810 00:07:31.810 real 0m13.597s 00:07:31.810 user 0m12.907s 00:07:31.810 sys 0m1.598s 00:07:31.810 13:06:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.810 13:06:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.810 ************************************ 00:07:31.810 END TEST skip_rpc 00:07:31.810 ************************************ 00:07:31.810 13:06:29 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:31.810 13:06:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.810 13:06:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.810 13:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:31.810 ************************************ 00:07:31.810 START TEST rpc_client 00:07:31.810 ************************************ 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:31.810 * Looking for test storage... 00:07:31.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.810 13:06:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.810 --rc genhtml_branch_coverage=1 00:07:31.810 --rc genhtml_function_coverage=1 00:07:31.810 --rc genhtml_legend=1 00:07:31.810 --rc geninfo_all_blocks=1 00:07:31.810 --rc geninfo_unexecuted_blocks=1 00:07:31.810 00:07:31.810 ' 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.810 --rc genhtml_branch_coverage=1 
00:07:31.810 --rc genhtml_function_coverage=1 00:07:31.810 --rc genhtml_legend=1 00:07:31.810 --rc geninfo_all_blocks=1 00:07:31.810 --rc geninfo_unexecuted_blocks=1 00:07:31.810 00:07:31.810 ' 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.810 --rc genhtml_branch_coverage=1 00:07:31.810 --rc genhtml_function_coverage=1 00:07:31.810 --rc genhtml_legend=1 00:07:31.810 --rc geninfo_all_blocks=1 00:07:31.810 --rc geninfo_unexecuted_blocks=1 00:07:31.810 00:07:31.810 ' 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.810 --rc genhtml_branch_coverage=1 00:07:31.810 --rc genhtml_function_coverage=1 00:07:31.810 --rc genhtml_legend=1 00:07:31.810 --rc geninfo_all_blocks=1 00:07:31.810 --rc geninfo_unexecuted_blocks=1 00:07:31.810 00:07:31.810 ' 00:07:31.810 13:06:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:31.810 OK 00:07:31.810 13:06:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:31.810 00:07:31.810 real 0m0.156s 00:07:31.810 user 0m0.106s 00:07:31.810 sys 0m0.058s 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.810 13:06:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:31.810 ************************************ 00:07:31.810 END TEST rpc_client 00:07:31.810 ************************************ 00:07:31.810 13:06:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:31.810 13:06:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.810 13:06:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.810 13:06:29 -- common/autotest_common.sh@10 
-- # set +x 00:07:31.810 ************************************ 00:07:31.810 START TEST json_config 00:07:31.810 ************************************ 00:07:31.810 13:06:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:32.069 13:06:29 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:32.069 13:06:29 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:07:32.069 13:06:29 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:32.069 13:06:29 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:32.069 13:06:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.069 13:06:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.069 13:06:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.069 13:06:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.069 13:06:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.069 13:06:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.069 13:06:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.069 13:06:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.069 13:06:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.069 13:06:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.069 13:06:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.069 13:06:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:32.069 13:06:29 json_config -- scripts/common.sh@345 -- # : 1 00:07:32.069 13:06:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.069 13:06:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:32.069 13:06:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:32.069 13:06:29 json_config -- scripts/common.sh@353 -- # local d=1 00:07:32.069 13:06:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.070 13:06:29 json_config -- scripts/common.sh@355 -- # echo 1 00:07:32.070 13:06:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.070 13:06:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:32.070 13:06:29 json_config -- scripts/common.sh@353 -- # local d=2 00:07:32.070 13:06:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.070 13:06:29 json_config -- scripts/common.sh@355 -- # echo 2 00:07:32.070 13:06:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.070 13:06:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.070 13:06:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.070 13:06:29 json_config -- scripts/common.sh@368 -- # return 0 00:07:32.070 13:06:29 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.070 13:06:29 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:32.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.070 --rc genhtml_branch_coverage=1 00:07:32.070 --rc genhtml_function_coverage=1 00:07:32.070 --rc genhtml_legend=1 00:07:32.070 --rc geninfo_all_blocks=1 00:07:32.070 --rc geninfo_unexecuted_blocks=1 00:07:32.070 00:07:32.070 ' 00:07:32.070 13:06:29 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:32.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.070 --rc genhtml_branch_coverage=1 00:07:32.070 --rc genhtml_function_coverage=1 00:07:32.070 --rc genhtml_legend=1 00:07:32.070 --rc geninfo_all_blocks=1 00:07:32.070 --rc geninfo_unexecuted_blocks=1 00:07:32.070 00:07:32.070 ' 00:07:32.070 13:06:29 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:32.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.070 --rc genhtml_branch_coverage=1 00:07:32.070 --rc genhtml_function_coverage=1 00:07:32.070 --rc genhtml_legend=1 00:07:32.070 --rc geninfo_all_blocks=1 00:07:32.070 --rc geninfo_unexecuted_blocks=1 00:07:32.070 00:07:32.070 ' 00:07:32.070 13:06:29 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:32.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.070 --rc genhtml_branch_coverage=1 00:07:32.070 --rc genhtml_function_coverage=1 00:07:32.070 --rc genhtml_legend=1 00:07:32.070 --rc geninfo_all_blocks=1 00:07:32.070 --rc geninfo_unexecuted_blocks=1 00:07:32.070 00:07:32.070 ' 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.070 13:06:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.070 13:06:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.070 13:06:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.070 13:06:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.070 13:06:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.070 13:06:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.070 13:06:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.070 13:06:29 json_config -- paths/export.sh@5 -- # export PATH 00:07:32.070 13:06:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@51 -- # : 0 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:32.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:32.070 13:06:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:32.070 13:06:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:32.071 13:06:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:32.071 INFO: JSON configuration test init 00:07:32.071 13:06:29 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:32.071 13:06:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.071 13:06:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.071 13:06:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:32.071 13:06:29 json_config -- json_config/common.sh@9 -- # local app=target 00:07:32.071 13:06:29 json_config -- json_config/common.sh@10 -- # shift 00:07:32.071 13:06:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:32.071 13:06:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:32.071 13:06:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:32.071 13:06:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:32.071 13:06:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:32.071 13:06:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3048917 00:07:32.071 13:06:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:32.071 13:06:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:32.071 Waiting for target to run... 
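The "Waiting for target to run..." phase above starts `spdk_tgt` and blocks until its RPC UNIX socket at `/var/tmp/spdk_tgt.sock` comes up. A minimal sketch of that polling pattern follows; the real `waitforlisten` in `common.sh` also verifies the PID stays alive and retries an RPC, and `waitforsocket` plus the file stand-in here are illustrative only, so the sketch runs without SPDK installed.

```shell
#!/usr/bin/env bash
# Hedged sketch of the wait-for-target step: poll until a socket path exists.
# waitforsocket is a made-up name; a plain file stands in for the RPC socket.
waitforsocket() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        # -S matches a real UNIX socket; -e covers the file stand-in below
        [[ -S $sock || -e $sock ]] && return 0
        sleep 0.1
    done
    return 1
}

sock=$(mktemp -u)                # a path that does not exist yet
( sleep 0.3; touch "$sock" ) &   # stand-in for spdk_tgt creating its socket
waitforsocket "$sock" 50 && echo "listening on $sock"
rm -f "$sock"
```

The bounded retry count mirrors the harness's `max_retries=100` seen in the trace, so a target that never comes up fails the stage instead of hanging it.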
00:07:32.071 13:06:29 json_config -- json_config/common.sh@25 -- # waitforlisten 3048917 /var/tmp/spdk_tgt.sock 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 3048917 ']' 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:32.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.071 13:06:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.071 [2024-11-25 13:06:29.681384] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:32.071 [2024-11-25 13:06:29.681478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048917 ] 00:07:32.637 [2024-11-25 13:06:30.055793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.637 [2024-11-25 13:06:30.099014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.201 13:06:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.201 13:06:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:33.201 13:06:30 json_config -- json_config/common.sh@26 -- # echo '' 00:07:33.201 00:07:33.201 13:06:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:33.201 13:06:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:33.201 13:06:30 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.201 13:06:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.201 13:06:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:33.201 13:06:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:33.201 13:06:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.201 13:06:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.201 13:06:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:33.201 13:06:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:33.201 13:06:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:36.492 13:06:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.492 13:06:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:36.492 13:06:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:36.492 
13:06:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@54 -- # sort 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:36.750 13:06:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.750 13:06:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:07:36.750 13:06:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.750 13:06:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:36.750 13:06:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:36.750 13:06:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:37.008 MallocForNvmf0 00:07:37.008 13:06:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:37.008 13:06:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:37.265 MallocForNvmf1 00:07:37.265 13:06:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:37.265 13:06:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:37.523 [2024-11-25 13:06:34.970937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.523 13:06:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.523 13:06:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.781 13:06:35 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:37.781 13:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:38.039 13:06:35 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:38.039 13:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:38.297 13:06:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:38.297 13:06:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:38.554 [2024-11-25 13:06:36.038312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:38.554 13:06:36 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:38.554 13:06:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.554 13:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.554 13:06:36 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:38.554 13:06:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.554 13:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.554 13:06:36 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:38.554 13:06:36 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:38.554 13:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:38.812 MallocBdevForConfigChangeCheck 00:07:38.812 13:06:36 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:38.812 13:06:36 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.812 13:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.812 13:06:36 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:38.813 13:06:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:39.379 13:06:36 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:39.379 INFO: shutting down applications... 
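The provisioning phase traced above (`create_nvmf_subsystem_config` through `MallocBdevForConfigChangeCheck`) is a fixed sequence of JSON-RPC calls. The sketch below replays that sequence with an `rpc()` stand-in that only echoes, so it runs without SPDK; in the actual test each call goes through `scripts/rpc.py -s /var/tmp/spdk_tgt.sock`. The arguments are taken directly from the log: two malloc bdevs, a TCP transport, and one subsystem carrying both namespaces plus a 127.0.0.1:4420 listener.

```shell
#!/usr/bin/env bash
# Stand-in for scripts/rpc.py -s /var/tmp/spdk_tgt.sock (echoes instead of
# sending, so this sketch is runnable anywhere).
rpc() { echo "rpc.py $*"; }

rpc bdev_malloc_create 8 512 --name MallocForNvmf0
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

Ordering matters in the real run: the bdevs and transport must exist before the subsystem references them, and the listener comes last, which is why the log's "Target Listening" notice appears only at the end of the phase.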
00:07:39.379 13:06:36 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:39.379 13:06:36 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:39.379 13:06:36 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:39.379 13:06:36 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:40.750 Calling clear_iscsi_subsystem 00:07:40.750 Calling clear_nvmf_subsystem 00:07:40.750 Calling clear_nbd_subsystem 00:07:40.750 Calling clear_ublk_subsystem 00:07:40.750 Calling clear_vhost_blk_subsystem 00:07:40.750 Calling clear_vhost_scsi_subsystem 00:07:40.750 Calling clear_bdev_subsystem 00:07:40.750 13:06:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:40.750 13:06:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:40.750 13:06:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:40.750 13:06:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:40.750 13:06:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:40.750 13:06:38 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:41.315 13:06:38 json_config -- json_config/json_config.sh@352 -- # break 00:07:41.315 13:06:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:41.315 13:06:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:41.315 13:06:38 json_config -- 
json_config/common.sh@31 -- # local app=target 00:07:41.315 13:06:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:41.315 13:06:38 json_config -- json_config/common.sh@35 -- # [[ -n 3048917 ]] 00:07:41.315 13:06:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3048917 00:07:41.315 13:06:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:41.315 13:06:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.315 13:06:38 json_config -- json_config/common.sh@41 -- # kill -0 3048917 00:07:41.315 13:06:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:41.884 13:06:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:41.884 13:06:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:41.884 13:06:39 json_config -- json_config/common.sh@41 -- # kill -0 3048917 00:07:41.884 13:06:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:41.884 13:06:39 json_config -- json_config/common.sh@43 -- # break 00:07:41.884 13:06:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:41.884 13:06:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:41.884 SPDK target shutdown done 00:07:41.884 13:06:39 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:41.884 INFO: relaunching applications... 
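The shutdown just traced (`kill -SIGINT`, then `kill -0` polls with `sleep 0.5`, bounded at 30 iterations) can be sketched as below. This is a simplified stand-in, not the harness's code: the demo child is a detached `sleep`, and the demo signal is TERM because backgrounded children of a non-interactive shell often ignore SIGINT, whereas the real harness sends SIGINT to `spdk_tgt`.

```shell
#!/usr/bin/env bash
# Hedged sketch of json_config_test_shutdown_app: signal the target, then poll
# liveness with kill -0 for up to 30 half-second intervals before giving up.
shutdown_app() {
    local pid=$1 sig=${2:-INT} i
    kill -s "$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
        sleep 0.5
    done
    echo "pid $pid still alive after timeout" >&2
    return 1
}

# Detached stand-in for spdk_tgt (reparented, so kill -0 fails once it exits).
pid=$(bash -c 'sleep 60 & echo $!')
shutdown_app "$pid" TERM
```

Using `kill -0` checks existence without delivering a signal, and the bounded loop is what lets the harness fall through to a hard failure path rather than hang on a wedged target.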
00:07:41.884 13:06:39 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.884 13:06:39 json_config -- json_config/common.sh@9 -- # local app=target 00:07:41.884 13:06:39 json_config -- json_config/common.sh@10 -- # shift 00:07:41.884 13:06:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:41.884 13:06:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:41.884 13:06:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:41.884 13:06:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:41.884 13:06:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:41.884 13:06:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3050118 00:07:41.884 13:06:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:41.884 13:06:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:41.884 Waiting for target to run... 00:07:41.884 13:06:39 json_config -- json_config/common.sh@25 -- # waitforlisten 3050118 /var/tmp/spdk_tgt.sock 00:07:41.884 13:06:39 json_config -- common/autotest_common.sh@835 -- # '[' -z 3050118 ']' 00:07:41.884 13:06:39 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:41.884 13:06:39 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.884 13:06:39 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:41.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:41.884 13:06:39 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.884 13:06:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:41.884 [2024-11-25 13:06:39.349711] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:41.884 [2024-11-25 13:06:39.349801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050118 ] 00:07:42.453 [2024-11-25 13:06:39.922443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.453 [2024-11-25 13:06:39.974098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.808 [2024-11-25 13:06:43.028045] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.808 [2024-11-25 13:06:43.060500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:45.808 13:06:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.808 13:06:43 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:45.808 13:06:43 json_config -- json_config/common.sh@26 -- # echo '' 00:07:45.808 00:07:45.808 13:06:43 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:45.808 13:06:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:45.808 INFO: Checking if target configuration is the same... 
00:07:45.808 13:06:43 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:45.808 13:06:43 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:45.808 13:06:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:45.808 + '[' 2 -ne 2 ']' 00:07:45.808 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:45.808 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:45.808 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:45.808 +++ basename /dev/fd/62 00:07:45.808 ++ mktemp /tmp/62.XXX 00:07:45.808 + tmp_file_1=/tmp/62.ewS 00:07:45.808 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:45.808 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:45.808 + tmp_file_2=/tmp/spdk_tgt_config.json.2wl 00:07:45.808 + ret=0 00:07:45.808 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:46.066 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:46.066 + diff -u /tmp/62.ewS /tmp/spdk_tgt_config.json.2wl 00:07:46.066 + echo 'INFO: JSON config files are the same' 00:07:46.066 INFO: JSON config files are the same 00:07:46.066 + rm /tmp/62.ewS /tmp/spdk_tgt_config.json.2wl 00:07:46.066 + exit 0 00:07:46.066 13:06:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:46.066 13:06:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:46.066 INFO: changing configuration and checking if this can be detected... 
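`json_diff.sh`, as traced above, filters both configs through `config_filter.py -method sort` before diffing, so JSON key order cannot cause a false mismatch. A rough stand-in for that check, under the assumption that normalizing with `json.dumps(sort_keys=True)` approximates what the filter script does:

```shell
# Rough stand-in for the sort-then-diff check traced above. Assumption:
# json.dumps(..., sort_keys=True) approximates config_filter.py -method sort.
normalize_json() {
    python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

tmp1=$(mktemp /tmp/62.XXX)
tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
echo '{"subsystems": [], "version": 1}' | normalize_json > "$tmp1"
echo '{"version": 1, "subsystems": []}' | normalize_json > "$tmp2"

# Key order differs in the inputs, but the normalized files are identical.
if diff -u "$tmp1" "$tmp2"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$tmp1" "$tmp2"
```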
00:07:46.066 13:06:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:46.066 13:06:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:46.323 13:06:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:46.323 13:06:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:46.323 13:06:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:46.323 + '[' 2 -ne 2 ']' 00:07:46.323 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:46.323 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:46.324 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:46.324 +++ basename /dev/fd/62 00:07:46.324 ++ mktemp /tmp/62.XXX 00:07:46.324 + tmp_file_1=/tmp/62.bQr 00:07:46.324 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:46.324 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:46.324 + tmp_file_2=/tmp/spdk_tgt_config.json.IsT 00:07:46.324 + ret=0 00:07:46.324 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:46.582 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:46.840 + diff -u /tmp/62.bQr /tmp/spdk_tgt_config.json.IsT 00:07:46.840 + ret=1 00:07:46.840 + echo '=== Start of file: /tmp/62.bQr ===' 00:07:46.840 + cat /tmp/62.bQr 00:07:46.840 + echo '=== End of file: /tmp/62.bQr ===' 00:07:46.840 + echo '' 00:07:46.840 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IsT ===' 00:07:46.840 + cat /tmp/spdk_tgt_config.json.IsT 00:07:46.840 + echo '=== End of file: /tmp/spdk_tgt_config.json.IsT ===' 00:07:46.840 + echo '' 00:07:46.840 + rm /tmp/62.bQr /tmp/spdk_tgt_config.json.IsT 00:07:46.840 + exit 1 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:46.840 INFO: configuration change detected. 
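The second pass above deletes `MallocBdevForConfigChangeCheck` over RPC and expects the diff to fail (`ret=1`); on mismatch the harness dumps both files into the log so the discrepancy is visible. Sketched with a hypothetical function name:

```shell
# Sketch of the mismatch branch traced above: a non-zero diff means a config
# change was detected, and both files are dumped so the log shows why.
check_configs() {
    f1=$1
    f2=$2
    if diff -u "$f1" "$f2" > /dev/null; then
        echo 'INFO: JSON config files are the same'
        return 0
    fi
    for f in "$f1" "$f2"; do
        echo "=== Start of file: $f ==="
        cat "$f"
        echo "=== End of file: $f ==="
        echo ''
    done
    echo 'INFO: configuration change detected.'
    return 1
}
```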
00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@324 -- # [[ -n 3050118 ]] 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:46.840 13:06:44 json_config -- json_config/json_config.sh@330 -- # killprocess 3050118 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@954 -- # '[' -z 3050118 ']' 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@958 -- # kill -0 
3050118 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@959 -- # uname 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3050118 00:07:46.840 13:06:44 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.841 13:06:44 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.841 13:06:44 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3050118' 00:07:46.841 killing process with pid 3050118 00:07:46.841 13:06:44 json_config -- common/autotest_common.sh@973 -- # kill 3050118 00:07:46.841 13:06:44 json_config -- common/autotest_common.sh@978 -- # wait 3050118 00:07:48.742 13:06:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:48.742 13:06:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:48.742 13:06:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.742 13:06:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:48.742 13:06:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:48.742 13:06:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:48.742 INFO: Success 00:07:48.742 00:07:48.742 real 0m16.518s 00:07:48.742 user 0m18.001s 00:07:48.742 sys 0m2.762s 00:07:48.742 13:06:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.742 13:06:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:48.742 ************************************ 00:07:48.742 END TEST json_config 00:07:48.742 ************************************ 00:07:48.742 13:06:46 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:48.742 13:06:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.742 13:06:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.742 13:06:46 -- common/autotest_common.sh@10 -- # set +x 00:07:48.742 ************************************ 00:07:48.742 START TEST json_config_extra_key 00:07:48.742 ************************************ 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:48.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.742 --rc genhtml_branch_coverage=1 00:07:48.742 --rc genhtml_function_coverage=1 00:07:48.742 --rc genhtml_legend=1 00:07:48.742 --rc geninfo_all_blocks=1 
00:07:48.742 --rc geninfo_unexecuted_blocks=1 00:07:48.742 00:07:48.742 ' 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:48.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.742 --rc genhtml_branch_coverage=1 00:07:48.742 --rc genhtml_function_coverage=1 00:07:48.742 --rc genhtml_legend=1 00:07:48.742 --rc geninfo_all_blocks=1 00:07:48.742 --rc geninfo_unexecuted_blocks=1 00:07:48.742 00:07:48.742 ' 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:48.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.742 --rc genhtml_branch_coverage=1 00:07:48.742 --rc genhtml_function_coverage=1 00:07:48.742 --rc genhtml_legend=1 00:07:48.742 --rc geninfo_all_blocks=1 00:07:48.742 --rc geninfo_unexecuted_blocks=1 00:07:48.742 00:07:48.742 ' 00:07:48.742 13:06:46 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:48.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.742 --rc genhtml_branch_coverage=1 00:07:48.742 --rc genhtml_function_coverage=1 00:07:48.742 --rc genhtml_legend=1 00:07:48.742 --rc geninfo_all_blocks=1 00:07:48.742 --rc geninfo_unexecuted_blocks=1 00:07:48.742 00:07:48.742 ' 00:07:48.742 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
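The long xtrace run above is `common.sh`'s `lt 1.15 2` check (is the installed `lcov` older than 2?), implemented by `cmp_versions` splitting both strings on `IFS=.-:` and comparing field by field. A shorter equivalent that swaps the manual loop for GNU `sort -V` (an assumption that version sort is available, as it is with coreutils on these Linux runners):

```shell
# Compact alternative to the cmp_versions field-by-field loop traced above:
# sort -V orders version strings naturally, so "$1" sorting first (and not
# equalling "$2") means $1 < $2. Assumes GNU coreutils sort.
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}
```

`version_lt 1.15 2` succeeds, matching the branch the log takes before enabling the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options.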
00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.742 13:06:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.742 13:06:46 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.742 13:06:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.742 13:06:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.742 13:06:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:48.742 13:06:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.742 13:06:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:48.743 13:06:46 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:48.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:48.743 13:06:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:48.743 INFO: launching applications... 00:07:48.743 13:06:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3051045 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:48.743 Waiting for target to run... 
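`json_config_test_shutdown_app`, traced shortly after this point, sends `SIGINT` and then polls `kill -0` for up to 30 half-second intervals before printing `SPDK target shutdown done`. A standalone sketch under that reading of the trace (the escalation to `SIGKILL` on timeout is an assumption, not something this log shows):

```shell
# Sketch of the shutdown loop: SIGINT for a graceful stop, then poll with
# kill -0 up to 30 times at 0.5s intervals; escalate to SIGKILL on timeout
# (the escalation path is assumed, not taken in the log above).
shutdown_app() {
    pid=$1
    kill -s INT "$pid" 2>/dev/null
    i=0
    while [ "$i" -lt 30 ]; do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
        i=$((i + 1))
    done
    echo "App pid $pid still running, sending SIGKILL" >&2
    kill -9 "$pid" 2>/dev/null
    return 1
}
```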
00:07:48.743 13:06:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3051045 /var/tmp/spdk_tgt.sock 00:07:48.743 13:06:46 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3051045 ']' 00:07:48.743 13:06:46 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:48.743 13:06:46 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.743 13:06:46 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:48.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:48.743 13:06:46 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.743 13:06:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:48.743 [2024-11-25 13:06:46.243417] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:48.743 [2024-11-25 13:06:46.243508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051045 ] 00:07:49.311 [2024-11-25 13:06:46.797007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.311 [2024-11-25 13:06:46.848546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.570 13:06:47 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.570 13:06:47 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:49.570 13:06:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:49.570 00:07:49.570 13:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:49.570 INFO: shutting down applications... 00:07:49.570 13:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:49.570 13:06:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:49.570 13:06:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:49.570 13:06:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3051045 ]] 00:07:49.570 13:06:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3051045 00:07:49.570 13:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:49.571 13:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:49.571 13:06:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3051045 00:07:49.571 13:06:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:50.141 13:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:50.141 13:06:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:50.141 13:06:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3051045 00:07:50.141 13:06:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:50.141 13:06:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:50.141 13:06:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:50.141 13:06:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:50.141 SPDK target shutdown done 00:07:50.141 13:06:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:50.141 Success 00:07:50.141 00:07:50.141 real 0m1.694s 00:07:50.141 user 0m1.487s 00:07:50.141 sys 0m0.683s 00:07:50.141 13:06:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.141 13:06:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set 
+x 00:07:50.141 ************************************ 00:07:50.141 END TEST json_config_extra_key 00:07:50.141 ************************************ 00:07:50.141 13:06:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:50.141 13:06:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.141 13:06:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.141 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:07:50.141 ************************************ 00:07:50.141 START TEST alias_rpc 00:07:50.141 ************************************ 00:07:50.141 13:06:47 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:50.400 * Looking for test storage... 00:07:50.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@340 -- # 
ver1_l=2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.400 13:06:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.400 --rc genhtml_branch_coverage=1 00:07:50.400 --rc genhtml_function_coverage=1 00:07:50.400 --rc genhtml_legend=1 00:07:50.400 --rc geninfo_all_blocks=1 00:07:50.400 --rc geninfo_unexecuted_blocks=1 00:07:50.400 
00:07:50.400 ' 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.400 --rc genhtml_branch_coverage=1 00:07:50.400 --rc genhtml_function_coverage=1 00:07:50.400 --rc genhtml_legend=1 00:07:50.400 --rc geninfo_all_blocks=1 00:07:50.400 --rc geninfo_unexecuted_blocks=1 00:07:50.400 00:07:50.400 ' 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.400 --rc genhtml_branch_coverage=1 00:07:50.400 --rc genhtml_function_coverage=1 00:07:50.400 --rc genhtml_legend=1 00:07:50.400 --rc geninfo_all_blocks=1 00:07:50.400 --rc geninfo_unexecuted_blocks=1 00:07:50.400 00:07:50.400 ' 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.400 --rc genhtml_branch_coverage=1 00:07:50.400 --rc genhtml_function_coverage=1 00:07:50.400 --rc genhtml_legend=1 00:07:50.400 --rc geninfo_all_blocks=1 00:07:50.400 --rc geninfo_unexecuted_blocks=1 00:07:50.400 00:07:50.400 ' 00:07:50.400 13:06:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:50.400 13:06:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3051364 00:07:50.400 13:06:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:50.400 13:06:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3051364 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3051364 ']' 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.400 13:06:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.400 [2024-11-25 13:06:47.968338] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:50.400 [2024-11-25 13:06:47.968428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051364 ] 00:07:50.400 [2024-11-25 13:06:48.033578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.658 [2024-11-25 13:06:48.091897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.916 13:06:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.916 13:06:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.916 13:06:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:51.174 13:06:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3051364 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3051364 ']' 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3051364 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051364 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.174 
13:06:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051364' 00:07:51.174 killing process with pid 3051364 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 3051364 00:07:51.174 13:06:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 3051364 00:07:51.738 00:07:51.738 real 0m1.329s 00:07:51.738 user 0m1.456s 00:07:51.738 sys 0m0.439s 00:07:51.738 13:06:49 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.738 13:06:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.738 ************************************ 00:07:51.738 END TEST alias_rpc 00:07:51.738 ************************************ 00:07:51.738 13:06:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:51.738 13:06:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:51.738 13:06:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.738 13:06:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.738 13:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:51.738 ************************************ 00:07:51.738 START TEST spdkcli_tcp 00:07:51.738 ************************************ 00:07:51.738 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:51.738 * Looking for test storage... 
00:07:51.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.739 13:06:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.739 --rc genhtml_branch_coverage=1 00:07:51.739 --rc genhtml_function_coverage=1 00:07:51.739 --rc genhtml_legend=1 00:07:51.739 --rc geninfo_all_blocks=1 00:07:51.739 --rc geninfo_unexecuted_blocks=1 00:07:51.739 00:07:51.739 ' 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.739 --rc genhtml_branch_coverage=1 00:07:51.739 --rc genhtml_function_coverage=1 00:07:51.739 --rc genhtml_legend=1 00:07:51.739 --rc geninfo_all_blocks=1 00:07:51.739 --rc geninfo_unexecuted_blocks=1 00:07:51.739 00:07:51.739 ' 00:07:51.739 13:06:49 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.739 --rc genhtml_branch_coverage=1 00:07:51.739 --rc genhtml_function_coverage=1 00:07:51.739 --rc genhtml_legend=1 00:07:51.739 --rc geninfo_all_blocks=1 00:07:51.739 --rc geninfo_unexecuted_blocks=1 00:07:51.739 00:07:51.739 ' 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.739 --rc genhtml_branch_coverage=1 00:07:51.739 --rc genhtml_function_coverage=1 00:07:51.739 --rc genhtml_legend=1 00:07:51.739 --rc geninfo_all_blocks=1 00:07:51.739 --rc geninfo_unexecuted_blocks=1 00:07:51.739 00:07:51.739 ' 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3051557 00:07:51.739 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:51.739 13:06:49 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 3051557 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3051557 ']' 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.739 13:06:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.739 [2024-11-25 13:06:49.370757] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:51.739 [2024-11-25 13:06:49.370833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051557 ] 00:07:51.996 [2024-11-25 13:06:49.436910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.996 [2024-11-25 13:06:49.499422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.996 [2024-11-25 13:06:49.499428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.255 13:06:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.255 13:06:49 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:52.255 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3051687 00:07:52.255 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:52.255 13:06:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:52.513 [ 00:07:52.513 "bdev_malloc_delete", 00:07:52.513 "bdev_malloc_create", 00:07:52.513 "bdev_null_resize", 00:07:52.513 "bdev_null_delete", 00:07:52.513 "bdev_null_create", 00:07:52.513 "bdev_nvme_cuse_unregister", 00:07:52.513 "bdev_nvme_cuse_register", 00:07:52.513 "bdev_opal_new_user", 00:07:52.513 "bdev_opal_set_lock_state", 00:07:52.513 "bdev_opal_delete", 00:07:52.513 "bdev_opal_get_info", 00:07:52.513 "bdev_opal_create", 00:07:52.513 "bdev_nvme_opal_revert", 00:07:52.513 "bdev_nvme_opal_init", 00:07:52.513 "bdev_nvme_send_cmd", 00:07:52.513 "bdev_nvme_set_keys", 00:07:52.513 "bdev_nvme_get_path_iostat", 00:07:52.513 "bdev_nvme_get_mdns_discovery_info", 00:07:52.513 "bdev_nvme_stop_mdns_discovery", 00:07:52.513 "bdev_nvme_start_mdns_discovery", 00:07:52.513 "bdev_nvme_set_multipath_policy", 00:07:52.513 "bdev_nvme_set_preferred_path", 00:07:52.513 "bdev_nvme_get_io_paths", 00:07:52.513 "bdev_nvme_remove_error_injection", 00:07:52.513 "bdev_nvme_add_error_injection", 00:07:52.513 "bdev_nvme_get_discovery_info", 00:07:52.513 "bdev_nvme_stop_discovery", 00:07:52.513 "bdev_nvme_start_discovery", 00:07:52.513 "bdev_nvme_get_controller_health_info", 00:07:52.513 "bdev_nvme_disable_controller", 00:07:52.513 "bdev_nvme_enable_controller", 00:07:52.513 "bdev_nvme_reset_controller", 00:07:52.513 "bdev_nvme_get_transport_statistics", 00:07:52.513 "bdev_nvme_apply_firmware", 00:07:52.513 "bdev_nvme_detach_controller", 00:07:52.513 "bdev_nvme_get_controllers", 00:07:52.513 "bdev_nvme_attach_controller", 00:07:52.513 "bdev_nvme_set_hotplug", 00:07:52.513 "bdev_nvme_set_options", 00:07:52.513 "bdev_passthru_delete", 00:07:52.513 "bdev_passthru_create", 00:07:52.513 "bdev_lvol_set_parent_bdev", 00:07:52.513 "bdev_lvol_set_parent", 00:07:52.513 "bdev_lvol_check_shallow_copy", 00:07:52.513 "bdev_lvol_start_shallow_copy", 00:07:52.513 "bdev_lvol_grow_lvstore", 00:07:52.513 "bdev_lvol_get_lvols", 00:07:52.513 
"bdev_lvol_get_lvstores", 00:07:52.513 "bdev_lvol_delete", 00:07:52.513 "bdev_lvol_set_read_only", 00:07:52.513 "bdev_lvol_resize", 00:07:52.513 "bdev_lvol_decouple_parent", 00:07:52.513 "bdev_lvol_inflate", 00:07:52.513 "bdev_lvol_rename", 00:07:52.513 "bdev_lvol_clone_bdev", 00:07:52.513 "bdev_lvol_clone", 00:07:52.513 "bdev_lvol_snapshot", 00:07:52.513 "bdev_lvol_create", 00:07:52.513 "bdev_lvol_delete_lvstore", 00:07:52.513 "bdev_lvol_rename_lvstore", 00:07:52.513 "bdev_lvol_create_lvstore", 00:07:52.513 "bdev_raid_set_options", 00:07:52.513 "bdev_raid_remove_base_bdev", 00:07:52.513 "bdev_raid_add_base_bdev", 00:07:52.513 "bdev_raid_delete", 00:07:52.513 "bdev_raid_create", 00:07:52.513 "bdev_raid_get_bdevs", 00:07:52.513 "bdev_error_inject_error", 00:07:52.513 "bdev_error_delete", 00:07:52.513 "bdev_error_create", 00:07:52.513 "bdev_split_delete", 00:07:52.513 "bdev_split_create", 00:07:52.513 "bdev_delay_delete", 00:07:52.513 "bdev_delay_create", 00:07:52.513 "bdev_delay_update_latency", 00:07:52.513 "bdev_zone_block_delete", 00:07:52.513 "bdev_zone_block_create", 00:07:52.513 "blobfs_create", 00:07:52.513 "blobfs_detect", 00:07:52.513 "blobfs_set_cache_size", 00:07:52.513 "bdev_aio_delete", 00:07:52.513 "bdev_aio_rescan", 00:07:52.513 "bdev_aio_create", 00:07:52.513 "bdev_ftl_set_property", 00:07:52.513 "bdev_ftl_get_properties", 00:07:52.513 "bdev_ftl_get_stats", 00:07:52.513 "bdev_ftl_unmap", 00:07:52.513 "bdev_ftl_unload", 00:07:52.513 "bdev_ftl_delete", 00:07:52.513 "bdev_ftl_load", 00:07:52.513 "bdev_ftl_create", 00:07:52.513 "bdev_virtio_attach_controller", 00:07:52.513 "bdev_virtio_scsi_get_devices", 00:07:52.513 "bdev_virtio_detach_controller", 00:07:52.513 "bdev_virtio_blk_set_hotplug", 00:07:52.513 "bdev_iscsi_delete", 00:07:52.513 "bdev_iscsi_create", 00:07:52.513 "bdev_iscsi_set_options", 00:07:52.513 "accel_error_inject_error", 00:07:52.513 "ioat_scan_accel_module", 00:07:52.513 "dsa_scan_accel_module", 00:07:52.513 "iaa_scan_accel_module", 
00:07:52.513 "vfu_virtio_create_fs_endpoint", 00:07:52.513 "vfu_virtio_create_scsi_endpoint", 00:07:52.513 "vfu_virtio_scsi_remove_target", 00:07:52.513 "vfu_virtio_scsi_add_target", 00:07:52.513 "vfu_virtio_create_blk_endpoint", 00:07:52.513 "vfu_virtio_delete_endpoint", 00:07:52.513 "keyring_file_remove_key", 00:07:52.513 "keyring_file_add_key", 00:07:52.513 "keyring_linux_set_options", 00:07:52.513 "fsdev_aio_delete", 00:07:52.513 "fsdev_aio_create", 00:07:52.513 "iscsi_get_histogram", 00:07:52.513 "iscsi_enable_histogram", 00:07:52.513 "iscsi_set_options", 00:07:52.513 "iscsi_get_auth_groups", 00:07:52.513 "iscsi_auth_group_remove_secret", 00:07:52.513 "iscsi_auth_group_add_secret", 00:07:52.513 "iscsi_delete_auth_group", 00:07:52.513 "iscsi_create_auth_group", 00:07:52.513 "iscsi_set_discovery_auth", 00:07:52.513 "iscsi_get_options", 00:07:52.513 "iscsi_target_node_request_logout", 00:07:52.513 "iscsi_target_node_set_redirect", 00:07:52.513 "iscsi_target_node_set_auth", 00:07:52.513 "iscsi_target_node_add_lun", 00:07:52.513 "iscsi_get_stats", 00:07:52.513 "iscsi_get_connections", 00:07:52.513 "iscsi_portal_group_set_auth", 00:07:52.513 "iscsi_start_portal_group", 00:07:52.513 "iscsi_delete_portal_group", 00:07:52.513 "iscsi_create_portal_group", 00:07:52.513 "iscsi_get_portal_groups", 00:07:52.513 "iscsi_delete_target_node", 00:07:52.513 "iscsi_target_node_remove_pg_ig_maps", 00:07:52.513 "iscsi_target_node_add_pg_ig_maps", 00:07:52.513 "iscsi_create_target_node", 00:07:52.513 "iscsi_get_target_nodes", 00:07:52.513 "iscsi_delete_initiator_group", 00:07:52.513 "iscsi_initiator_group_remove_initiators", 00:07:52.513 "iscsi_initiator_group_add_initiators", 00:07:52.513 "iscsi_create_initiator_group", 00:07:52.513 "iscsi_get_initiator_groups", 00:07:52.513 "nvmf_set_crdt", 00:07:52.513 "nvmf_set_config", 00:07:52.513 "nvmf_set_max_subsystems", 00:07:52.513 "nvmf_stop_mdns_prr", 00:07:52.513 "nvmf_publish_mdns_prr", 00:07:52.513 "nvmf_subsystem_get_listeners", 
00:07:52.513 "nvmf_subsystem_get_qpairs", 00:07:52.513 "nvmf_subsystem_get_controllers", 00:07:52.513 "nvmf_get_stats", 00:07:52.513 "nvmf_get_transports", 00:07:52.513 "nvmf_create_transport", 00:07:52.513 "nvmf_get_targets", 00:07:52.513 "nvmf_delete_target", 00:07:52.513 "nvmf_create_target", 00:07:52.513 "nvmf_subsystem_allow_any_host", 00:07:52.513 "nvmf_subsystem_set_keys", 00:07:52.513 "nvmf_subsystem_remove_host", 00:07:52.513 "nvmf_subsystem_add_host", 00:07:52.513 "nvmf_ns_remove_host", 00:07:52.513 "nvmf_ns_add_host", 00:07:52.513 "nvmf_subsystem_remove_ns", 00:07:52.513 "nvmf_subsystem_set_ns_ana_group", 00:07:52.513 "nvmf_subsystem_add_ns", 00:07:52.514 "nvmf_subsystem_listener_set_ana_state", 00:07:52.514 "nvmf_discovery_get_referrals", 00:07:52.514 "nvmf_discovery_remove_referral", 00:07:52.514 "nvmf_discovery_add_referral", 00:07:52.514 "nvmf_subsystem_remove_listener", 00:07:52.514 "nvmf_subsystem_add_listener", 00:07:52.514 "nvmf_delete_subsystem", 00:07:52.514 "nvmf_create_subsystem", 00:07:52.514 "nvmf_get_subsystems", 00:07:52.514 "env_dpdk_get_mem_stats", 00:07:52.514 "nbd_get_disks", 00:07:52.514 "nbd_stop_disk", 00:07:52.514 "nbd_start_disk", 00:07:52.514 "ublk_recover_disk", 00:07:52.514 "ublk_get_disks", 00:07:52.514 "ublk_stop_disk", 00:07:52.514 "ublk_start_disk", 00:07:52.514 "ublk_destroy_target", 00:07:52.514 "ublk_create_target", 00:07:52.514 "virtio_blk_create_transport", 00:07:52.514 "virtio_blk_get_transports", 00:07:52.514 "vhost_controller_set_coalescing", 00:07:52.514 "vhost_get_controllers", 00:07:52.514 "vhost_delete_controller", 00:07:52.514 "vhost_create_blk_controller", 00:07:52.514 "vhost_scsi_controller_remove_target", 00:07:52.514 "vhost_scsi_controller_add_target", 00:07:52.514 "vhost_start_scsi_controller", 00:07:52.514 "vhost_create_scsi_controller", 00:07:52.514 "thread_set_cpumask", 00:07:52.514 "scheduler_set_options", 00:07:52.514 "framework_get_governor", 00:07:52.514 "framework_get_scheduler", 00:07:52.514 
"framework_set_scheduler", 00:07:52.514 "framework_get_reactors", 00:07:52.514 "thread_get_io_channels", 00:07:52.514 "thread_get_pollers", 00:07:52.514 "thread_get_stats", 00:07:52.514 "framework_monitor_context_switch", 00:07:52.514 "spdk_kill_instance", 00:07:52.514 "log_enable_timestamps", 00:07:52.514 "log_get_flags", 00:07:52.514 "log_clear_flag", 00:07:52.514 "log_set_flag", 00:07:52.514 "log_get_level", 00:07:52.514 "log_set_level", 00:07:52.514 "log_get_print_level", 00:07:52.514 "log_set_print_level", 00:07:52.514 "framework_enable_cpumask_locks", 00:07:52.514 "framework_disable_cpumask_locks", 00:07:52.514 "framework_wait_init", 00:07:52.514 "framework_start_init", 00:07:52.514 "scsi_get_devices", 00:07:52.514 "bdev_get_histogram", 00:07:52.514 "bdev_enable_histogram", 00:07:52.514 "bdev_set_qos_limit", 00:07:52.514 "bdev_set_qd_sampling_period", 00:07:52.514 "bdev_get_bdevs", 00:07:52.514 "bdev_reset_iostat", 00:07:52.514 "bdev_get_iostat", 00:07:52.514 "bdev_examine", 00:07:52.514 "bdev_wait_for_examine", 00:07:52.514 "bdev_set_options", 00:07:52.514 "accel_get_stats", 00:07:52.514 "accel_set_options", 00:07:52.514 "accel_set_driver", 00:07:52.514 "accel_crypto_key_destroy", 00:07:52.514 "accel_crypto_keys_get", 00:07:52.514 "accel_crypto_key_create", 00:07:52.514 "accel_assign_opc", 00:07:52.514 "accel_get_module_info", 00:07:52.514 "accel_get_opc_assignments", 00:07:52.514 "vmd_rescan", 00:07:52.514 "vmd_remove_device", 00:07:52.514 "vmd_enable", 00:07:52.514 "sock_get_default_impl", 00:07:52.514 "sock_set_default_impl", 00:07:52.514 "sock_impl_set_options", 00:07:52.514 "sock_impl_get_options", 00:07:52.514 "iobuf_get_stats", 00:07:52.514 "iobuf_set_options", 00:07:52.514 "keyring_get_keys", 00:07:52.514 "vfu_tgt_set_base_path", 00:07:52.514 "framework_get_pci_devices", 00:07:52.514 "framework_get_config", 00:07:52.514 "framework_get_subsystems", 00:07:52.514 "fsdev_set_opts", 00:07:52.514 "fsdev_get_opts", 00:07:52.514 "trace_get_info", 
00:07:52.514 "trace_get_tpoint_group_mask", 00:07:52.514 "trace_disable_tpoint_group", 00:07:52.514 "trace_enable_tpoint_group", 00:07:52.514 "trace_clear_tpoint_mask", 00:07:52.514 "trace_set_tpoint_mask", 00:07:52.514 "notify_get_notifications", 00:07:52.514 "notify_get_types", 00:07:52.514 "spdk_get_version", 00:07:52.514 "rpc_get_methods" 00:07:52.514 ] 00:07:52.514 13:06:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.514 13:06:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:52.514 13:06:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3051557 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3051557 ']' 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3051557 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051557 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051557' 00:07:52.514 killing process with pid 3051557 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3051557 00:07:52.514 13:06:50 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3051557 00:07:53.082 00:07:53.082 real 0m1.380s 00:07:53.082 user 0m2.450s 00:07:53.082 sys 0m0.481s 00:07:53.082 13:06:50 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.082 13:06:50 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.082 ************************************ 00:07:53.082 END TEST spdkcli_tcp 00:07:53.082 ************************************ 00:07:53.082 13:06:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:53.082 13:06:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.082 13:06:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.082 13:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:53.082 ************************************ 00:07:53.082 START TEST dpdk_mem_utility 00:07:53.082 ************************************ 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:53.082 * Looking for test storage... 00:07:53.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.082 13:06:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:07:53.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.082 --rc genhtml_branch_coverage=1 00:07:53.082 --rc genhtml_function_coverage=1 00:07:53.082 --rc genhtml_legend=1 00:07:53.082 --rc geninfo_all_blocks=1 00:07:53.082 --rc geninfo_unexecuted_blocks=1 00:07:53.082 00:07:53.082 ' 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:53.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.082 --rc genhtml_branch_coverage=1 00:07:53.082 --rc genhtml_function_coverage=1 00:07:53.082 --rc genhtml_legend=1 00:07:53.082 --rc geninfo_all_blocks=1 00:07:53.082 --rc geninfo_unexecuted_blocks=1 00:07:53.082 00:07:53.082 ' 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:53.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.082 --rc genhtml_branch_coverage=1 00:07:53.082 --rc genhtml_function_coverage=1 00:07:53.082 --rc genhtml_legend=1 00:07:53.082 --rc geninfo_all_blocks=1 00:07:53.082 --rc geninfo_unexecuted_blocks=1 00:07:53.082 00:07:53.082 ' 00:07:53.082 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:53.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.082 --rc genhtml_branch_coverage=1 00:07:53.082 --rc genhtml_function_coverage=1 00:07:53.082 --rc genhtml_legend=1 00:07:53.082 --rc geninfo_all_blocks=1 00:07:53.082 --rc geninfo_unexecuted_blocks=1 00:07:53.082 00:07:53.082 ' 00:07:53.082 13:06:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:53.340 13:06:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3051870 00:07:53.340 13:06:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:53.340 13:06:50 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3051870 00:07:53.340 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3051870 ']' 00:07:53.340 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.340 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.340 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.340 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.340 13:06:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.340 [2024-11-25 13:06:50.797121] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:53.340 [2024-11-25 13:06:50.797212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051870 ] 00:07:53.340 [2024-11-25 13:06:50.863368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.340 [2024-11-25 13:06:50.920258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.598 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.598 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:53.598 13:06:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:53.598 13:06:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:53.598 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.598 
13:06:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:53.598 { 00:07:53.598 "filename": "/tmp/spdk_mem_dump.txt" 00:07:53.598 } 00:07:53.598 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.598 13:06:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:53.598 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:53.598 1 heaps totaling size 810.000000 MiB 00:07:53.598 size: 810.000000 MiB heap id: 0 00:07:53.598 end heaps---------- 00:07:53.598 9 mempools totaling size 595.772034 MiB 00:07:53.598 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:53.598 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:53.598 size: 92.545471 MiB name: bdev_io_3051870 00:07:53.598 size: 50.003479 MiB name: msgpool_3051870 00:07:53.598 size: 36.509338 MiB name: fsdev_io_3051870 00:07:53.598 size: 21.763794 MiB name: PDU_Pool 00:07:53.598 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:53.598 size: 4.133484 MiB name: evtpool_3051870 00:07:53.598 size: 0.026123 MiB name: Session_Pool 00:07:53.598 end mempools------- 00:07:53.598 6 memzones totaling size 4.142822 MiB 00:07:53.598 size: 1.000366 MiB name: RG_ring_0_3051870 00:07:53.598 size: 1.000366 MiB name: RG_ring_1_3051870 00:07:53.598 size: 1.000366 MiB name: RG_ring_4_3051870 00:07:53.598 size: 1.000366 MiB name: RG_ring_5_3051870 00:07:53.598 size: 0.125366 MiB name: RG_ring_2_3051870 00:07:53.598 size: 0.015991 MiB name: RG_ring_3_3051870 00:07:53.598 end memzones------- 00:07:53.598 13:06:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:53.856 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:53.856 list of free elements. 
size: 10.862488 MiB 00:07:53.856 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:53.856 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:53.856 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:53.856 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:53.856 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:53.856 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:53.856 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:53.856 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:53.856 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:53.856 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:53.856 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:53.856 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:53.856 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:53.856 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:53.856 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:53.856 list of standard malloc elements. 
size: 199.218628 MiB 00:07:53.856 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:53.856 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:53.856 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:53.856 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:53.856 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:53.856 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:53.856 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:53.856 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:53.856 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:53.857 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:53.857 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:53.857 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:53.857 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:53.857 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:53.857 list of memzone associated elements. 
size: 599.918884 MiB 00:07:53.857 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:53.857 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:53.857 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:53.857 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:53.857 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:53.857 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3051870_0 00:07:53.857 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:53.857 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3051870_0 00:07:53.857 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:53.857 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3051870_0 00:07:53.857 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:53.857 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:53.857 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:53.857 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:53.857 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:53.857 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3051870_0 00:07:53.857 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:53.857 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3051870 00:07:53.857 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:53.857 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3051870 00:07:53.857 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:53.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:53.857 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:53.857 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:53.857 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:53.857 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:53.857 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:53.857 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:53.857 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:53.857 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3051870 00:07:53.857 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:53.857 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3051870 00:07:53.857 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:53.857 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3051870 00:07:53.857 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:53.857 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3051870 00:07:53.857 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:53.857 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3051870 00:07:53.857 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:53.857 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3051870 00:07:53.857 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:53.857 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:53.857 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:53.857 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:53.857 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:53.857 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:53.857 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:53.857 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3051870 00:07:53.857 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:53.857 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3051870 00:07:53.857 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:07:53.857 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:53.857 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:53.857 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:53.857 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:53.857 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3051870 00:07:53.857 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:53.857 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:53.857 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:53.857 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3051870 00:07:53.857 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:53.857 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3051870 00:07:53.857 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:53.857 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3051870 00:07:53.857 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:53.857 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:53.857 13:06:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:53.857 13:06:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3051870 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3051870 ']' 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3051870 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051870 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.857 13:06:51 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051870' 00:07:53.857 killing process with pid 3051870 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3051870 00:07:53.857 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3051870 00:07:54.115 00:07:54.115 real 0m1.159s 00:07:54.115 user 0m1.148s 00:07:54.115 sys 0m0.407s 00:07:54.115 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.115 13:06:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:54.115 ************************************ 00:07:54.115 END TEST dpdk_mem_utility 00:07:54.115 ************************************ 00:07:54.374 13:06:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:54.374 13:06:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.374 13:06:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.374 13:06:51 -- common/autotest_common.sh@10 -- # set +x 00:07:54.374 ************************************ 00:07:54.374 START TEST event 00:07:54.374 ************************************ 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:54.374 * Looking for test storage... 
00:07:54.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.374 13:06:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.374 13:06:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.374 13:06:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.374 13:06:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.374 13:06:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.374 13:06:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.374 13:06:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.374 13:06:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.374 13:06:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.374 13:06:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.374 13:06:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.374 13:06:51 event -- scripts/common.sh@344 -- # case "$op" in 00:07:54.374 13:06:51 event -- scripts/common.sh@345 -- # : 1 00:07:54.374 13:06:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.374 13:06:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.374 13:06:51 event -- scripts/common.sh@365 -- # decimal 1 00:07:54.374 13:06:51 event -- scripts/common.sh@353 -- # local d=1 00:07:54.374 13:06:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.374 13:06:51 event -- scripts/common.sh@355 -- # echo 1 00:07:54.374 13:06:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.374 13:06:51 event -- scripts/common.sh@366 -- # decimal 2 00:07:54.374 13:06:51 event -- scripts/common.sh@353 -- # local d=2 00:07:54.374 13:06:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.374 13:06:51 event -- scripts/common.sh@355 -- # echo 2 00:07:54.374 13:06:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.374 13:06:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.374 13:06:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.374 13:06:51 event -- scripts/common.sh@368 -- # return 0 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.374 --rc genhtml_branch_coverage=1 00:07:54.374 --rc genhtml_function_coverage=1 00:07:54.374 --rc genhtml_legend=1 00:07:54.374 --rc geninfo_all_blocks=1 00:07:54.374 --rc geninfo_unexecuted_blocks=1 00:07:54.374 00:07:54.374 ' 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.374 --rc genhtml_branch_coverage=1 00:07:54.374 --rc genhtml_function_coverage=1 00:07:54.374 --rc genhtml_legend=1 00:07:54.374 --rc geninfo_all_blocks=1 00:07:54.374 --rc geninfo_unexecuted_blocks=1 00:07:54.374 00:07:54.374 ' 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.374 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:54.374 --rc genhtml_branch_coverage=1 00:07:54.374 --rc genhtml_function_coverage=1 00:07:54.374 --rc genhtml_legend=1 00:07:54.374 --rc geninfo_all_blocks=1 00:07:54.374 --rc geninfo_unexecuted_blocks=1 00:07:54.374 00:07:54.374 ' 00:07:54.374 13:06:51 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.374 --rc genhtml_branch_coverage=1 00:07:54.374 --rc genhtml_function_coverage=1 00:07:54.374 --rc genhtml_legend=1 00:07:54.374 --rc geninfo_all_blocks=1 00:07:54.374 --rc geninfo_unexecuted_blocks=1 00:07:54.374 00:07:54.374 ' 00:07:54.374 13:06:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:54.374 13:06:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:54.374 13:06:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:54.375 13:06:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:54.375 13:06:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.375 13:06:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:54.375 ************************************ 00:07:54.375 START TEST event_perf 00:07:54.375 ************************************ 00:07:54.375 13:06:51 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:54.375 Running I/O for 1 seconds...[2024-11-25 13:06:51.993086] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:07:54.375 [2024-11-25 13:06:51.993152] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052089 ] 00:07:54.633 [2024-11-25 13:06:52.060812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.633 [2024-11-25 13:06:52.121325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.633 [2024-11-25 13:06:52.121382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.633 [2024-11-25 13:06:52.121445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.633 [2024-11-25 13:06:52.121448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.565 Running I/O for 1 seconds... 00:07:55.565 lcore 0: 231570 00:07:55.565 lcore 1: 231569 00:07:55.565 lcore 2: 231570 00:07:55.565 lcore 3: 231569 00:07:55.565 done. 
00:07:55.565 00:07:55.565 real 0m1.207s 00:07:55.565 user 0m4.126s 00:07:55.565 sys 0m0.077s 00:07:55.565 13:06:53 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.565 13:06:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:55.565 ************************************ 00:07:55.565 END TEST event_perf 00:07:55.565 ************************************ 00:07:55.565 13:06:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:55.565 13:06:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:55.565 13:06:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.565 13:06:53 event -- common/autotest_common.sh@10 -- # set +x 00:07:55.823 ************************************ 00:07:55.823 START TEST event_reactor 00:07:55.823 ************************************ 00:07:55.823 13:06:53 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:55.823 [2024-11-25 13:06:53.248400] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:07:55.823 [2024-11-25 13:06:53.248459] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052244 ] 00:07:55.823 [2024-11-25 13:06:53.312863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.823 [2024-11-25 13:06:53.367353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.775 test_start 00:07:56.776 oneshot 00:07:56.776 tick 100 00:07:56.776 tick 100 00:07:56.776 tick 250 00:07:56.776 tick 100 00:07:56.776 tick 100 00:07:56.776 tick 100 00:07:56.776 tick 250 00:07:56.776 tick 500 00:07:56.776 tick 100 00:07:56.776 tick 100 00:07:56.776 tick 250 00:07:56.776 tick 100 00:07:56.776 tick 100 00:07:56.776 test_end 00:07:56.776 00:07:56.776 real 0m1.192s 00:07:56.776 user 0m1.121s 00:07:56.776 sys 0m0.067s 00:07:56.776 13:06:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.776 13:06:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:56.776 ************************************ 00:07:56.776 END TEST event_reactor 00:07:56.776 ************************************ 00:07:57.033 13:06:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:57.033 13:06:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:57.033 13:06:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.033 13:06:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:57.033 ************************************ 00:07:57.033 START TEST event_reactor_perf 00:07:57.033 ************************************ 00:07:57.033 13:06:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:57.033 [2024-11-25 13:06:54.489672] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:57.033 [2024-11-25 13:06:54.489735] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052403 ] 00:07:57.033 [2024-11-25 13:06:54.554464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.033 [2024-11-25 13:06:54.611371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.019 test_start 00:07:58.019 test_end 00:07:58.019 Performance: 452213 events per second 00:07:58.019 00:07:58.019 real 0m1.198s 00:07:58.019 user 0m1.130s 00:07:58.019 sys 0m0.064s 00:07:58.019 13:06:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.019 13:06:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.019 ************************************ 00:07:58.019 END TEST event_reactor_perf 00:07:58.019 ************************************ 00:07:58.277 13:06:55 event -- event/event.sh@49 -- # uname -s 00:07:58.277 13:06:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:58.277 13:06:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:58.277 13:06:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.277 13:06:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.277 13:06:55 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.277 ************************************ 00:07:58.277 START TEST event_scheduler 00:07:58.277 ************************************ 00:07:58.277 13:06:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:58.277 * Looking for test storage... 00:07:58.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:58.277 13:06:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.277 13:06:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.277 13:06:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.277 13:06:55 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.277 13:06:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.278 13:06:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.278 --rc genhtml_branch_coverage=1 00:07:58.278 --rc genhtml_function_coverage=1 00:07:58.278 --rc genhtml_legend=1 00:07:58.278 --rc geninfo_all_blocks=1 00:07:58.278 --rc geninfo_unexecuted_blocks=1 00:07:58.278 00:07:58.278 ' 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.278 --rc genhtml_branch_coverage=1 00:07:58.278 --rc genhtml_function_coverage=1 00:07:58.278 --rc 
genhtml_legend=1 00:07:58.278 --rc geninfo_all_blocks=1 00:07:58.278 --rc geninfo_unexecuted_blocks=1 00:07:58.278 00:07:58.278 ' 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.278 --rc genhtml_branch_coverage=1 00:07:58.278 --rc genhtml_function_coverage=1 00:07:58.278 --rc genhtml_legend=1 00:07:58.278 --rc geninfo_all_blocks=1 00:07:58.278 --rc geninfo_unexecuted_blocks=1 00:07:58.278 00:07:58.278 ' 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.278 --rc genhtml_branch_coverage=1 00:07:58.278 --rc genhtml_function_coverage=1 00:07:58.278 --rc genhtml_legend=1 00:07:58.278 --rc geninfo_all_blocks=1 00:07:58.278 --rc geninfo_unexecuted_blocks=1 00:07:58.278 00:07:58.278 ' 00:07:58.278 13:06:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:58.278 13:06:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3052595 00:07:58.278 13:06:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:58.278 13:06:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:58.278 13:06:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3052595 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3052595 ']' 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.278 13:06:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.278 [2024-11-25 13:06:55.909918] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:07:58.278 [2024-11-25 13:06:55.910003] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052595 ] 00:07:58.535 [2024-11-25 13:06:55.979489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.535 [2024-11-25 13:06:56.040976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.535 [2024-11-25 13:06:56.041081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.535 [2024-11-25 13:06:56.041175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.535 [2024-11-25 13:06:56.041178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.535 13:06:56 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.535 13:06:56 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:58.535 13:06:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:58.535 13:06:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.535 13:06:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.535 [2024-11-25 13:06:56.138055] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:58.536 [2024-11-25 13:06:56.138082] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:58.536 [2024-11-25 13:06:56.138114] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:58.536 [2024-11-25 13:06:56.138126] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:58.536 [2024-11-25 13:06:56.138136] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:58.536 13:06:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.536 13:06:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:58.536 13:06:56 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.536 13:06:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 [2024-11-25 13:06:56.242349] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:58.793 13:06:56 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:58.793 13:06:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.793 13:06:56 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 ************************************ 00:07:58.793 START TEST scheduler_create_thread 00:07:58.793 ************************************ 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 2 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 3 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 4 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 5 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 6 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 7 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.793 8 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:58.793 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.794 13:06:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.794 9 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.794 10 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.794 13:06:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.794 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.358 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.358 00:07:59.358 real 0m0.590s 00:07:59.358 user 0m0.009s 00:07:59.358 sys 0m0.006s 00:07:59.358 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.358 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.358 ************************************ 00:07:59.358 END TEST scheduler_create_thread 00:07:59.358 ************************************ 00:07:59.358 13:06:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:59.358 13:06:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3052595 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3052595 ']' 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3052595 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3052595 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3052595' 00:07:59.358 killing process with pid 3052595 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3052595 00:07:59.358 13:06:56 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3052595 00:07:59.923 [2024-11-25 13:06:57.342567] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:59.923 00:07:59.923 real 0m1.834s 00:07:59.923 user 0m2.437s 00:07:59.923 sys 0m0.374s 00:07:59.923 13:06:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.923 13:06:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:59.923 ************************************ 00:07:59.923 END TEST event_scheduler 00:07:59.923 ************************************ 00:08:00.181 13:06:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:00.181 13:06:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:00.181 13:06:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.181 13:06:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.181 13:06:57 event -- common/autotest_common.sh@10 -- # set +x 00:08:00.181 ************************************ 00:08:00.181 START TEST app_repeat 00:08:00.181 ************************************ 00:08:00.181 13:06:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3052903 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3052903' 00:08:00.181 Process app_repeat pid: 3052903 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:00.181 spdk_app_start Round 0 00:08:00.181 13:06:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3052903 /var/tmp/spdk-nbd.sock 00:08:00.181 13:06:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3052903 ']' 00:08:00.181 13:06:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.181 13:06:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.181 13:06:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:00.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.181 13:06:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.181 13:06:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.181 [2024-11-25 13:06:57.638518] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:08:00.182 [2024-11-25 13:06:57.638579] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052903 ] 00:08:00.182 [2024-11-25 13:06:57.704090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.182 [2024-11-25 13:06:57.764515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.182 [2024-11-25 13:06:57.764520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.439 13:06:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.439 13:06:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:00.439 13:06:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.698 Malloc0 00:08:00.698 13:06:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.956 Malloc1 00:08:00.956 13:06:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.956 
13:06:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.956 13:06:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:01.214 /dev/nbd0 00:08:01.214 13:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.214 13:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:01.214 1+0 records in 00:08:01.214 1+0 records out 00:08:01.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152752 s, 26.8 MB/s 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.214 13:06:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:01.214 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.214 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.214 13:06:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:01.472 /dev/nbd1 00:08:01.472 13:06:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.472 13:06:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.472 13:06:59 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.472 13:06:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:01.731 1+0 records in 00:08:01.731 1+0 records out 00:08:01.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243578 s, 16.8 MB/s 00:08:01.731 13:06:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:01.731 13:06:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:01.731 13:06:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:01.731 13:06:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.731 13:06:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:01.731 13:06:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.731 13:06:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.731 13:06:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.731 13:06:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.731 13:06:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.988 13:06:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:01.988 { 00:08:01.988 "nbd_device": "/dev/nbd0", 00:08:01.988 "bdev_name": "Malloc0" 00:08:01.988 }, 00:08:01.988 { 00:08:01.988 "nbd_device": "/dev/nbd1", 00:08:01.988 "bdev_name": "Malloc1" 00:08:01.988 } 00:08:01.988 ]' 00:08:01.988 13:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:01.988 { 00:08:01.988 "nbd_device": "/dev/nbd0", 00:08:01.988 "bdev_name": "Malloc0" 00:08:01.988 
}, 00:08:01.988 { 00:08:01.988 "nbd_device": "/dev/nbd1", 00:08:01.989 "bdev_name": "Malloc1" 00:08:01.989 } 00:08:01.989 ]' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:01.989 /dev/nbd1' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:01.989 /dev/nbd1' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:01.989 256+0 records in 00:08:01.989 256+0 records out 00:08:01.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00382802 s, 274 MB/s 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:01.989 256+0 records in 00:08:01.989 256+0 records out 00:08:01.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019858 s, 52.8 MB/s 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:01.989 256+0 records in 00:08:01.989 256+0 records out 00:08:01.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219993 s, 47.7 MB/s 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:01.989 13:06:59 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.989 13:06:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.246 13:06:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:02.504 13:07:00 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.504 13:07:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.762 13:07:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.762 13:07:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.762 13:07:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:03.020 13:07:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:03.020 13:07:00 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:03.278 13:07:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:03.536 [2024-11-25 13:07:00.958187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.536 [2024-11-25 13:07:01.013353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.536 [2024-11-25 13:07:01.013354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.536 [2024-11-25 13:07:01.069978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:03.536 [2024-11-25 13:07:01.070036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:06.117 13:07:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:06.117 13:07:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:06.117 spdk_app_start Round 1 00:08:06.117 13:07:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3052903 /var/tmp/spdk-nbd.sock 00:08:06.117 13:07:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3052903 ']' 00:08:06.117 13:07:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:06.117 13:07:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.117 13:07:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:06.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:06.117 13:07:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.117 13:07:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:06.375 13:07:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.375 13:07:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:06.375 13:07:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:06.633 Malloc0 00:08:06.891 13:07:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:07.149 Malloc1 00:08:07.149 13:07:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.149 13:07:04 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:07.150 13:07:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:07.150 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:07.150 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.150 13:07:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:07.407 /dev/nbd0 00:08:07.407 13:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:07.407 13:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:07.407 1+0 records in 00:08:07.407 1+0 records out 00:08:07.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174862 s, 23.4 MB/s 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:07.407 13:07:04 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.407 13:07:04 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:07.407 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.407 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.407 13:07:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:07.665 /dev/nbd1 00:08:07.665 13:07:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.665 13:07:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:07.665 1+0 records in 00:08:07.665 1+0 records out 00:08:07.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209383 s, 19.6 MB/s 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:07.665 13:07:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:07.665 13:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.665 13:07:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.665 13:07:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.665 13:07:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.665 13:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:07.924 { 00:08:07.924 "nbd_device": "/dev/nbd0", 00:08:07.924 "bdev_name": "Malloc0" 00:08:07.924 }, 00:08:07.924 { 00:08:07.924 "nbd_device": "/dev/nbd1", 00:08:07.924 "bdev_name": "Malloc1" 00:08:07.924 } 00:08:07.924 ]' 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:07.924 { 00:08:07.924 "nbd_device": "/dev/nbd0", 00:08:07.924 "bdev_name": "Malloc0" 00:08:07.924 }, 00:08:07.924 { 00:08:07.924 "nbd_device": "/dev/nbd1", 00:08:07.924 "bdev_name": "Malloc1" 00:08:07.924 } 00:08:07.924 ]' 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:07.924 /dev/nbd1' 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:07.924 /dev/nbd1' 00:08:07.924 
13:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:07.924 256+0 records in 00:08:07.924 256+0 records out 00:08:07.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502571 s, 209 MB/s 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:07.924 256+0 records in 00:08:07.924 256+0 records out 00:08:07.924 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201681 s, 52.0 MB/s 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.924 13:07:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:08.182 256+0 records in 00:08:08.182 256+0 records out 00:08:08.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022222 s, 47.2 MB/s 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:08.182 13:07:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:08.183 13:07:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:08.183 13:07:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.183 13:07:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.440 13:07:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:08.698 13:07:06 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.698 13:07:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:08.955 13:07:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:08.955 13:07:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:09.213 13:07:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:09.470 [2024-11-25 13:07:07.004086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.470 [2024-11-25 13:07:07.059101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.470 [2024-11-25 13:07:07.059101] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.470 [2024-11-25 13:07:07.119643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:09.470 [2024-11-25 13:07:07.119719] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:12.751 13:07:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:12.751 13:07:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:12.751 spdk_app_start Round 2 00:08:12.751 13:07:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3052903 /var/tmp/spdk-nbd.sock 00:08:12.751 13:07:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3052903 ']' 00:08:12.751 13:07:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:12.751 13:07:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.751 13:07:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:12.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:12.751 13:07:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.751 13:07:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:12.751 13:07:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.751 13:07:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:12.751 13:07:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:12.751 Malloc0 00:08:12.751 13:07:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:13.009 Malloc1 00:08:13.009 13:07:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:13.009 13:07:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:13.575 /dev/nbd0 00:08:13.575 13:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:13.575 13:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:13.575 1+0 records in 00:08:13.575 1+0 records out 00:08:13.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172323 s, 23.8 MB/s 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:13.575 13:07:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.575 13:07:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:13.575 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.575 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:13.575 13:07:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:13.833 /dev/nbd1 00:08:13.833 13:07:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:13.834 13:07:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:13.834 1+0 records in 00:08:13.834 1+0 records out 00:08:13.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233366 s, 17.6 MB/s 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:13.834 13:07:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:13.834 13:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:13.834 13:07:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:13.834 13:07:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.834 13:07:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.834 13:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:14.092 { 00:08:14.092 "nbd_device": "/dev/nbd0", 00:08:14.092 "bdev_name": "Malloc0" 00:08:14.092 }, 00:08:14.092 { 00:08:14.092 "nbd_device": "/dev/nbd1", 00:08:14.092 "bdev_name": "Malloc1" 00:08:14.092 } 00:08:14.092 ]' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:14.092 { 00:08:14.092 "nbd_device": "/dev/nbd0", 00:08:14.092 "bdev_name": "Malloc0" 00:08:14.092 }, 00:08:14.092 { 00:08:14.092 "nbd_device": "/dev/nbd1", 00:08:14.092 "bdev_name": "Malloc1" 00:08:14.092 } 00:08:14.092 ]' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:14.092 /dev/nbd1' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:14.092 /dev/nbd1' 00:08:14.092 
13:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:14.092 256+0 records in 00:08:14.092 256+0 records out 00:08:14.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504256 s, 208 MB/s 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:14.092 256+0 records in 00:08:14.092 256+0 records out 00:08:14.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197096 s, 53.2 MB/s 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:14.092 256+0 records in 00:08:14.092 256+0 records out 00:08:14.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212246 s, 49.4 MB/s 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.092 13:07:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:14.350 13:07:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:14.607 13:07:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:14.607 13:07:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:14.607 13:07:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:14.607 13:07:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.607 13:07:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.607 13:07:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:14.865 13:07:12 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:14.865 13:07:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.865 13:07:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.865 13:07:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.865 13:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:15.123 13:07:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:15.123 13:07:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:15.381 13:07:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:15.639 [2024-11-25 13:07:13.093972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.639 [2024-11-25 13:07:13.151996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.639 [2024-11-25 13:07:13.152001] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.639 [2024-11-25 13:07:13.207136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:15.639 [2024-11-25 13:07:13.207194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:18.919 13:07:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3052903 /var/tmp/spdk-nbd.sock 00:08:18.919 13:07:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3052903 ']' 00:08:18.919 13:07:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:18.919 13:07:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.919 13:07:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:18.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:18.919 13:07:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.919 13:07:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:18.919 13:07:16 event.app_repeat -- event/event.sh@39 -- # killprocess 3052903 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3052903 ']' 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3052903 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3052903 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3052903' 00:08:18.919 killing process with pid 3052903 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3052903 00:08:18.919 13:07:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3052903 00:08:18.919 spdk_app_start is called in Round 0. 00:08:18.919 Shutdown signal received, stop current app iteration 00:08:18.919 Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 reinitialization... 00:08:18.919 spdk_app_start is called in Round 1. 00:08:18.919 Shutdown signal received, stop current app iteration 00:08:18.919 Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 reinitialization... 00:08:18.919 spdk_app_start is called in Round 2. 
00:08:18.919 Shutdown signal received, stop current app iteration 00:08:18.919 Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 reinitialization... 00:08:18.919 spdk_app_start is called in Round 3. 00:08:18.919 Shutdown signal received, stop current app iteration 00:08:18.919 13:07:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:18.919 13:07:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:18.919 00:08:18.919 real 0m18.777s 00:08:18.919 user 0m41.630s 00:08:18.919 sys 0m3.223s 00:08:18.920 13:07:16 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.920 13:07:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:18.920 ************************************ 00:08:18.920 END TEST app_repeat 00:08:18.920 ************************************ 00:08:18.920 13:07:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:18.920 13:07:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:18.920 13:07:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.920 13:07:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.920 13:07:16 event -- common/autotest_common.sh@10 -- # set +x 00:08:18.920 ************************************ 00:08:18.920 START TEST cpu_locks 00:08:18.920 ************************************ 00:08:18.920 13:07:16 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:18.920 * Looking for test storage... 
00:08:18.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:18.920 13:07:16 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.920 13:07:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.920 13:07:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.920 13:07:16 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.920 13:07:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.179 13:07:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.179 --rc genhtml_branch_coverage=1 00:08:19.179 --rc genhtml_function_coverage=1 00:08:19.179 --rc genhtml_legend=1 00:08:19.179 --rc geninfo_all_blocks=1 00:08:19.179 --rc geninfo_unexecuted_blocks=1 00:08:19.179 00:08:19.179 ' 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.179 --rc genhtml_branch_coverage=1 00:08:19.179 --rc genhtml_function_coverage=1 00:08:19.179 --rc genhtml_legend=1 00:08:19.179 --rc geninfo_all_blocks=1 00:08:19.179 --rc geninfo_unexecuted_blocks=1 
00:08:19.179 00:08:19.179 ' 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.179 --rc genhtml_branch_coverage=1 00:08:19.179 --rc genhtml_function_coverage=1 00:08:19.179 --rc genhtml_legend=1 00:08:19.179 --rc geninfo_all_blocks=1 00:08:19.179 --rc geninfo_unexecuted_blocks=1 00:08:19.179 00:08:19.179 ' 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.179 --rc genhtml_branch_coverage=1 00:08:19.179 --rc genhtml_function_coverage=1 00:08:19.179 --rc genhtml_legend=1 00:08:19.179 --rc geninfo_all_blocks=1 00:08:19.179 --rc geninfo_unexecuted_blocks=1 00:08:19.179 00:08:19.179 ' 00:08:19.179 13:07:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:19.179 13:07:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:19.179 13:07:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:19.179 13:07:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.179 13:07:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:19.179 ************************************ 00:08:19.179 START TEST default_locks 00:08:19.179 ************************************ 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3055393 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3055393 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3055393 ']' 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.179 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:19.179 [2024-11-25 13:07:16.666597] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:08:19.179 [2024-11-25 13:07:16.666703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055393 ] 00:08:19.179 [2024-11-25 13:07:16.730956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.179 [2024-11-25 13:07:16.790277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.437 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.437 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:19.437 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3055393 00:08:19.437 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3055393 00:08:19.437 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.695 lslocks: write error 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3055393 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3055393 ']' 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3055393 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055393 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3055393' 00:08:19.695 killing process with pid 3055393 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3055393 00:08:19.695 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3055393 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3055393 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3055393 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3055393 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3055393 ']' 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3055393) - No such process 00:08:20.261 ERROR: process (pid: 3055393) is no longer running 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:20.261 00:08:20.261 real 0m1.163s 00:08:20.261 user 0m1.113s 00:08:20.261 sys 0m0.520s 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.261 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 ************************************ 00:08:20.261 END TEST default_locks 00:08:20.261 ************************************ 00:08:20.261 13:07:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:20.261 13:07:17 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.261 13:07:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.261 13:07:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 ************************************ 00:08:20.261 START TEST default_locks_via_rpc 00:08:20.261 ************************************ 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3055557 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3055557 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3055557 ']' 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.261 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.261 [2024-11-25 13:07:17.878094] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:08:20.261 [2024-11-25 13:07:17.878190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055557 ] 00:08:20.520 [2024-11-25 13:07:17.945851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.520 [2024-11-25 13:07:18.005959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.778 13:07:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3055557 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3055557 00:08:20.778 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3055557 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3055557 ']' 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3055557 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055557 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055557' 00:08:21.036 killing process with pid 3055557 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3055557 00:08:21.036 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3055557 00:08:21.601 00:08:21.601 real 0m1.171s 00:08:21.601 user 0m1.136s 00:08:21.601 sys 0m0.503s 00:08:21.601 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.601 13:07:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.601 ************************************ 00:08:21.601 END TEST default_locks_via_rpc 00:08:21.601 ************************************ 00:08:21.601 13:07:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:21.601 13:07:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.601 13:07:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.601 13:07:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.601 ************************************ 00:08:21.601 START TEST non_locking_app_on_locked_coremask 00:08:21.601 ************************************ 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3055723 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3055723 /var/tmp/spdk.sock 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3055723 ']' 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:21.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.601 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.601 [2024-11-25 13:07:19.100123] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:21.601 [2024-11-25 13:07:19.100232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055723 ] 00:08:21.601 [2024-11-25 13:07:19.166336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.601 [2024-11-25 13:07:19.220986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3055731 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3055731 /var/tmp/spdk2.sock 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3055731 ']' 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.859 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:22.117 [2024-11-25 13:07:19.541824] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:22.117 [2024-11-25 13:07:19.541913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3055731 ] 00:08:22.117 [2024-11-25 13:07:19.641082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:22.117 [2024-11-25 13:07:19.641120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.117 [2024-11-25 13:07:19.753509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.049 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.049 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:23.049 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3055723 00:08:23.049 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3055723 00:08:23.049 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:23.614 lslocks: write error 00:08:23.614 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3055723 00:08:23.614 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3055723 ']' 00:08:23.614 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3055723 00:08:23.614 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:23.614 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.614 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055723 00:08:23.615 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.615 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.615 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3055723' 00:08:23.615 killing process with pid 3055723 00:08:23.615 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3055723 00:08:23.615 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3055723 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3055731 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3055731 ']' 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3055731 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055731 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.547 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055731' 00:08:24.547 killing process with pid 3055731 00:08:24.548 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3055731 00:08:24.548 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3055731 00:08:24.805 00:08:24.805 real 0m3.321s 00:08:24.805 user 0m3.538s 00:08:24.805 sys 0m1.076s 00:08:24.805 13:07:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.805 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 ************************************ 00:08:24.805 END TEST non_locking_app_on_locked_coremask 00:08:24.806 ************************************ 00:08:24.806 13:07:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:24.806 13:07:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.806 13:07:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.806 13:07:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:24.806 ************************************ 00:08:24.806 START TEST locking_app_on_unlocked_coremask 00:08:24.806 ************************************ 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3056157 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3056157 /var/tmp/spdk.sock 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3056157 ']' 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.806 13:07:22 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.806 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.063 [2024-11-25 13:07:22.477735] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:25.063 [2024-11-25 13:07:22.477824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056157 ] 00:08:25.063 [2024-11-25 13:07:22.543377] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:25.063 [2024-11-25 13:07:22.543416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.063 [2024-11-25 13:07:22.603576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3056165 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3056165 /var/tmp/spdk2.sock 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3056165 ']' 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:25.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.321 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:25.321 [2024-11-25 13:07:22.921320] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:08:25.321 [2024-11-25 13:07:22.921409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056165 ] 00:08:25.579 [2024-11-25 13:07:23.019264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.579 [2024-11-25 13:07:23.131415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.513 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.513 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:26.513 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3056165 00:08:26.513 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3056165 00:08:26.513 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:26.816 lslocks: write error 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3056157 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3056157 ']' 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3056157 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056157 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056157' 00:08:26.816 killing process with pid 3056157 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3056157 00:08:26.816 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3056157 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3056165 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3056165 ']' 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3056165 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056165 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056165' 00:08:27.777 killing process with pid 3056165 00:08:27.777 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3056165 00:08:27.777 13:07:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3056165 00:08:28.036 00:08:28.036 real 0m3.174s 00:08:28.036 user 0m3.393s 00:08:28.036 sys 0m1.018s 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.036 ************************************ 00:08:28.036 END TEST locking_app_on_unlocked_coremask 00:08:28.036 ************************************ 00:08:28.036 13:07:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:28.036 13:07:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.036 13:07:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.036 13:07:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:28.036 ************************************ 00:08:28.036 START TEST locking_app_on_locked_coremask 00:08:28.036 ************************************ 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3056595 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3056595 /var/tmp/spdk.sock 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3056595 ']' 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
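The teardown above probes for the per-core lock with `lslocks -p <pid> | grep -q spdk_cpu_lock` before killing each target. A minimal standalone sketch of that probe (assumption: this is a simplified form of the `locks_exist` helper in `event/cpu_locks.sh`, and util-linux `lslocks` is on PATH):

```shell
# Sketch of the locks_exist probe seen in the log above: succeed only if
# the given pid holds a file lock whose path mentions "spdk_cpu_lock".
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# Example: an ordinary shell holds no SPDK core lock, so this reports "free".
locks_exist $$ && echo held || echo free
```

The `lslocks: write error` lines in the log are harmless: `grep -q` exits on first match and closes the pipe early.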
00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.036 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.295 [2024-11-25 13:07:25.701408] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:28.295 [2024-11-25 13:07:25.701500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056595 ] 00:08:28.295 [2024-11-25 13:07:25.764780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.295 [2024-11-25 13:07:25.818083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3056604 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3056604 /var/tmp/spdk2.sock 
00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3056604 /var/tmp/spdk2.sock 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3056604 /var/tmp/spdk2.sock 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3056604 ']' 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:28.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
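The `NOT waitforlisten ...` call above is an expected-failure wrapper: the second `spdk_tgt` must fail to claim the already-locked core, and the test passes only if it does (`es=1`, then `(( !es == 0 ))`). A minimal sketch of that pattern (assumption: simplified from the `NOT`/`valid_exec_arg` helpers in `autotest_common.sh`):

```shell
# Minimal sketch of the NOT wrapper pattern from the log: run a command
# that is *expected* to fail, and succeed only when it actually failed.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert the result: exit 0 if the wrapped command failed, 1 otherwise.
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
```

This keeps `set -e` test scripts alive through deliberately failing steps, which is exactly what the `waitforlisten 3056604` invocation above relies on.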
00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.555 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:28.555 [2024-11-25 13:07:26.140362] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:28.555 [2024-11-25 13:07:26.140455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056604 ] 00:08:28.814 [2024-11-25 13:07:26.245184] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3056595 has claimed it. 00:08:28.814 [2024-11-25 13:07:26.245249] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:29.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3056604) - No such process 00:08:29.383 ERROR: process (pid: 3056604) is no longer running 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3056595 00:08:29.383 13:07:26 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3056595 00:08:29.383 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:29.641 lslocks: write error 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3056595 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3056595 ']' 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3056595 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056595 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056595' 00:08:29.641 killing process with pid 3056595 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3056595 00:08:29.641 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3056595 00:08:30.208 00:08:30.208 real 0m2.030s 00:08:30.208 user 0m2.265s 00:08:30.208 sys 0m0.621s 00:08:30.208 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.208 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.208 ************************************ 00:08:30.208 END TEST locking_app_on_locked_coremask 00:08:30.208 ************************************ 00:08:30.208 13:07:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:30.208 13:07:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.208 13:07:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.208 13:07:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:30.208 ************************************ 00:08:30.208 START TEST locking_overlapped_coremask 00:08:30.208 ************************************ 00:08:30.208 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:30.208 13:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3056893 00:08:30.208 13:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:30.208 13:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3056893 /var/tmp/spdk.sock 00:08:30.208 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3056893 ']' 00:08:30.208 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.209 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.209 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:30.209 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.209 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:30.209 [2024-11-25 13:07:27.783262] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:30.209 [2024-11-25 13:07:27.783380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056893 ] 00:08:30.209 [2024-11-25 13:07:27.847790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:30.468 [2024-11-25 13:07:27.910917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.468 [2024-11-25 13:07:27.910982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.468 [2024-11-25 13:07:27.910985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3056904 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3056904 /var/tmp/spdk2.sock 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3056904 /var/tmp/spdk2.sock 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:30.728 13:07:28 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3056904 /var/tmp/spdk2.sock 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3056904 ']' 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:30.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.728 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:30.728 [2024-11-25 13:07:28.248556] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:08:30.728 [2024-11-25 13:07:28.248653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056904 ] 00:08:30.728 [2024-11-25 13:07:28.352965] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3056893 has claimed it. 00:08:30.728 [2024-11-25 13:07:28.353020] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:31.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3056904) - No such process 00:08:31.665 ERROR: process (pid: 3056904) is no longer running 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3056893 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3056893 ']' 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3056893 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.665 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3056893 00:08:31.665 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.665 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.665 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3056893' 00:08:31.665 killing process with pid 3056893 00:08:31.665 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3056893 00:08:31.665 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3056893 00:08:31.923 00:08:31.923 real 0m1.710s 00:08:31.923 user 0m4.784s 00:08:31.923 sys 0m0.468s 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:31.923 
************************************ 00:08:31.923 END TEST locking_overlapped_coremask 00:08:31.923 ************************************ 00:08:31.923 13:07:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:31.923 13:07:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.923 13:07:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.923 13:07:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:31.923 ************************************ 00:08:31.923 START TEST locking_overlapped_coremask_via_rpc 00:08:31.923 ************************************ 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3057068 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3057068 /var/tmp/spdk.sock 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3057068 ']' 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
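The `check_remaining_locks` step in the overlapped-coremask test above globs `/var/tmp/spdk_cpu_lock_*` and compares the result against the expected set `spdk_cpu_lock_{000..002}` for mask `0x7`. A self-contained sketch of that comparison, run against a scratch directory instead of `/var/tmp` so it has no side effects:

```shell
# Sketch of the check_remaining_locks comparison from the log: spdk_tgt
# -m 0x7 should leave exactly one lock file per claimed core (000..002).
tmp=$(mktemp -d)
touch "$tmp"/spdk_cpu_lock_{000..002}      # simulate the three core locks
locks=("$tmp"/spdk_cpu_lock_*)             # what is actually present
locks_expected=("$tmp"/spdk_cpu_lock_{000..002})  # what should be present
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "locks match" || echo "mismatch"
rm -rf "$tmp"
```

The long `\/\v\a\r\/...` escape run in the log is just xtrace quoting each character of the right-hand side of this same `[[ ... == ... ]]` comparison.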
00:08:31.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.923 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.923 [2024-11-25 13:07:29.543446] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:31.923 [2024-11-25 13:07:29.543546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057068 ] 00:08:32.183 [2024-11-25 13:07:29.614520] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:32.183 [2024-11-25 13:07:29.614553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.183 [2024-11-25 13:07:29.674340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.183 [2024-11-25 13:07:29.674379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.183 [2024-11-25 13:07:29.674384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3057144 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3057144 /var/tmp/spdk2.sock 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3057144 ']' 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:32.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.442 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:32.442 [2024-11-25 13:07:29.999297] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:32.442 [2024-11-25 13:07:29.999409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057144 ] 00:08:32.701 [2024-11-25 13:07:30.108578] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:32.701 [2024-11-25 13:07:30.108640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.701 [2024-11-25 13:07:30.235119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.701 [2024-11-25 13:07:30.235186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.701 [2024-11-25 13:07:30.235189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.638 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.638 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:33.638 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:33.638 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.638 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.638 13:07:31 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.638 [2024-11-25 13:07:31.010396] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3057068 has claimed it. 00:08:33.638 request: 00:08:33.638 { 00:08:33.638 "method": "framework_enable_cpumask_locks", 00:08:33.638 "req_id": 1 00:08:33.638 } 00:08:33.638 Got JSON-RPC error response 00:08:33.638 response: 00:08:33.638 { 00:08:33.638 "code": -32603, 00:08:33.638 "message": "Failed to claim CPU core: 2" 00:08:33.638 } 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3057068 /var/tmp/spdk.sock 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3057068 ']' 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.638 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3057144 /var/tmp/spdk2.sock 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3057144 ']' 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:33.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.897 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:34.155 00:08:34.155 real 0m2.079s 00:08:34.155 user 0m1.133s 00:08:34.155 sys 0m0.202s 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.155 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.155 ************************************ 00:08:34.155 END TEST locking_overlapped_coremask_via_rpc 00:08:34.155 ************************************ 00:08:34.155 13:07:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:34.155 13:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3057068 ]] 00:08:34.155 13:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3057068 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3057068 ']' 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3057068 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057068 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057068' 00:08:34.155 killing process with pid 3057068 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3057068 00:08:34.155 13:07:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3057068 00:08:34.414 13:07:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3057144 ]] 00:08:34.414 13:07:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3057144 00:08:34.414 13:07:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3057144 ']' 00:08:34.414 13:07:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3057144 00:08:34.414 13:07:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:34.414 13:07:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.414 13:07:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057144 00:08:34.673 13:07:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:34.673 13:07:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:34.673 13:07:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3057144' 00:08:34.673 killing process with pid 3057144 00:08:34.673 13:07:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3057144 00:08:34.673 13:07:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3057144 00:08:34.932 13:07:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.932 13:07:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:34.932 13:07:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3057068 ]] 00:08:34.932 13:07:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3057068 00:08:34.932 13:07:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3057068 ']' 00:08:34.932 13:07:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3057068 00:08:34.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3057068) - No such process 00:08:34.932 13:07:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3057068 is not found' 00:08:34.932 Process with pid 3057068 is not found 00:08:34.932 13:07:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3057144 ]] 00:08:34.932 13:07:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3057144 00:08:34.932 13:07:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3057144 ']' 00:08:34.932 13:07:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3057144 00:08:34.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3057144) - No such process 00:08:34.932 13:07:32 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3057144 is not found' 00:08:34.932 Process with pid 3057144 is not found 00:08:34.932 13:07:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:34.932 00:08:34.932 real 0m16.092s 00:08:34.932 user 0m29.142s 00:08:34.932 sys 0m5.395s 00:08:34.932 13:07:32 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.932 
13:07:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:34.932 ************************************ 00:08:34.932 END TEST cpu_locks 00:08:34.932 ************************************ 00:08:34.932 00:08:34.932 real 0m40.752s 00:08:34.932 user 1m19.802s 00:08:34.932 sys 0m9.463s 00:08:34.932 13:07:32 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.932 13:07:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.932 ************************************ 00:08:34.932 END TEST event 00:08:34.932 ************************************ 00:08:34.932 13:07:32 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:34.932 13:07:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.932 13:07:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.932 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:08:35.192 ************************************ 00:08:35.192 START TEST thread 00:08:35.192 ************************************ 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:35.192 * Looking for test storage... 
00:08:35.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.192 13:07:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.192 13:07:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.192 13:07:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.192 13:07:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.192 13:07:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.192 13:07:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.192 13:07:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.192 13:07:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.192 13:07:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.192 13:07:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.192 13:07:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.192 13:07:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:35.192 13:07:32 thread -- scripts/common.sh@345 -- # : 1 00:08:35.192 13:07:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.192 13:07:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.192 13:07:32 thread -- scripts/common.sh@365 -- # decimal 1 00:08:35.192 13:07:32 thread -- scripts/common.sh@353 -- # local d=1 00:08:35.192 13:07:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.192 13:07:32 thread -- scripts/common.sh@355 -- # echo 1 00:08:35.192 13:07:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.192 13:07:32 thread -- scripts/common.sh@366 -- # decimal 2 00:08:35.192 13:07:32 thread -- scripts/common.sh@353 -- # local d=2 00:08:35.192 13:07:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.192 13:07:32 thread -- scripts/common.sh@355 -- # echo 2 00:08:35.192 13:07:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.192 13:07:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.192 13:07:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.192 13:07:32 thread -- scripts/common.sh@368 -- # return 0 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.192 --rc genhtml_branch_coverage=1 00:08:35.192 --rc genhtml_function_coverage=1 00:08:35.192 --rc genhtml_legend=1 00:08:35.192 --rc geninfo_all_blocks=1 00:08:35.192 --rc geninfo_unexecuted_blocks=1 00:08:35.192 00:08:35.192 ' 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.192 --rc genhtml_branch_coverage=1 00:08:35.192 --rc genhtml_function_coverage=1 00:08:35.192 --rc genhtml_legend=1 00:08:35.192 --rc geninfo_all_blocks=1 00:08:35.192 --rc geninfo_unexecuted_blocks=1 00:08:35.192 00:08:35.192 ' 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.192 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.192 --rc genhtml_branch_coverage=1 00:08:35.192 --rc genhtml_function_coverage=1 00:08:35.192 --rc genhtml_legend=1 00:08:35.192 --rc geninfo_all_blocks=1 00:08:35.192 --rc geninfo_unexecuted_blocks=1 00:08:35.192 00:08:35.192 ' 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.192 --rc genhtml_branch_coverage=1 00:08:35.192 --rc genhtml_function_coverage=1 00:08:35.192 --rc genhtml_legend=1 00:08:35.192 --rc geninfo_all_blocks=1 00:08:35.192 --rc geninfo_unexecuted_blocks=1 00:08:35.192 00:08:35.192 ' 00:08:35.192 13:07:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.192 13:07:32 thread -- common/autotest_common.sh@10 -- # set +x 00:08:35.192 ************************************ 00:08:35.192 START TEST thread_poller_perf 00:08:35.192 ************************************ 00:08:35.192 13:07:32 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:35.192 [2024-11-25 13:07:32.795953] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:08:35.192 [2024-11-25 13:07:32.796021] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057577 ] 00:08:35.450 [2024-11-25 13:07:32.864193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.450 [2024-11-25 13:07:32.922613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.450 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:36.385 [2024-11-25T12:07:34.044Z] ====================================== 00:08:36.385 [2024-11-25T12:07:34.044Z] busy:2713196310 (cyc) 00:08:36.385 [2024-11-25T12:07:34.044Z] total_run_count: 368000 00:08:36.385 [2024-11-25T12:07:34.044Z] tsc_hz: 2700000000 (cyc) 00:08:36.385 [2024-11-25T12:07:34.044Z] ====================================== 00:08:36.385 [2024-11-25T12:07:34.044Z] poller_cost: 7372 (cyc), 2730 (nsec) 00:08:36.385 00:08:36.385 real 0m1.212s 00:08:36.385 user 0m1.139s 00:08:36.385 sys 0m0.068s 00:08:36.385 13:07:33 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.385 13:07:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:36.385 ************************************ 00:08:36.385 END TEST thread_poller_perf 00:08:36.385 ************************************ 00:08:36.385 13:07:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:36.385 13:07:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:36.385 13:07:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.385 13:07:34 thread -- common/autotest_common.sh@10 -- # set +x 00:08:36.385 ************************************ 00:08:36.385 START TEST thread_poller_perf 00:08:36.385 
************************************ 00:08:36.385 13:07:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:36.644 [2024-11-25 13:07:34.056506] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:36.644 [2024-11-25 13:07:34.056569] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057730 ] 00:08:36.644 [2024-11-25 13:07:34.122322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.644 [2024-11-25 13:07:34.178739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.644 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:38.017 [2024-11-25T12:07:35.676Z] ====================================== 00:08:38.017 [2024-11-25T12:07:35.676Z] busy:2702166582 (cyc) 00:08:38.017 [2024-11-25T12:07:35.676Z] total_run_count: 4874000 00:08:38.017 [2024-11-25T12:07:35.676Z] tsc_hz: 2700000000 (cyc) 00:08:38.017 [2024-11-25T12:07:35.676Z] ====================================== 00:08:38.017 [2024-11-25T12:07:35.676Z] poller_cost: 554 (cyc), 205 (nsec) 00:08:38.017 00:08:38.017 real 0m1.199s 00:08:38.017 user 0m1.129s 00:08:38.017 sys 0m0.063s 00:08:38.017 13:07:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.017 13:07:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:38.017 ************************************ 00:08:38.017 END TEST thread_poller_perf 00:08:38.017 ************************************ 00:08:38.017 13:07:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:38.017 00:08:38.017 real 0m2.660s 00:08:38.017 user 0m2.404s 00:08:38.017 sys 0m0.259s 00:08:38.017 13:07:35 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.017 13:07:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:38.017 ************************************ 00:08:38.017 END TEST thread 00:08:38.017 ************************************ 00:08:38.017 13:07:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:38.017 13:07:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.017 13:07:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.017 13:07:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.017 13:07:35 -- common/autotest_common.sh@10 -- # set +x 00:08:38.017 ************************************ 00:08:38.017 START TEST app_cmdline 00:08:38.017 ************************************ 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:38.017 * Looking for test storage... 00:08:38.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.017 13:07:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.017 --rc genhtml_branch_coverage=1 
00:08:38.017 --rc genhtml_function_coverage=1 00:08:38.017 --rc genhtml_legend=1 00:08:38.017 --rc geninfo_all_blocks=1 00:08:38.017 --rc geninfo_unexecuted_blocks=1 00:08:38.017 00:08:38.017 ' 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.017 --rc genhtml_branch_coverage=1 00:08:38.017 --rc genhtml_function_coverage=1 00:08:38.017 --rc genhtml_legend=1 00:08:38.017 --rc geninfo_all_blocks=1 00:08:38.017 --rc geninfo_unexecuted_blocks=1 00:08:38.017 00:08:38.017 ' 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.017 --rc genhtml_branch_coverage=1 00:08:38.017 --rc genhtml_function_coverage=1 00:08:38.017 --rc genhtml_legend=1 00:08:38.017 --rc geninfo_all_blocks=1 00:08:38.017 --rc geninfo_unexecuted_blocks=1 00:08:38.017 00:08:38.017 ' 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.017 --rc genhtml_branch_coverage=1 00:08:38.017 --rc genhtml_function_coverage=1 00:08:38.017 --rc genhtml_legend=1 00:08:38.017 --rc geninfo_all_blocks=1 00:08:38.017 --rc geninfo_unexecuted_blocks=1 00:08:38.017 00:08:38.017 ' 00:08:38.017 13:07:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:38.017 13:07:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3057945 00:08:38.017 13:07:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:38.017 13:07:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3057945 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3057945 ']' 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.017 13:07:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.017 [2024-11-25 13:07:35.496158] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:38.017 [2024-11-25 13:07:35.496248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057945 ] 00:08:38.017 [2024-11-25 13:07:35.561896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.017 [2024-11-25 13:07:35.620997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.274 13:07:35 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.274 13:07:35 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:38.274 13:07:35 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:38.530 { 00:08:38.530 "version": "SPDK v25.01-pre git sha1 9b3991571", 00:08:38.530 "fields": { 00:08:38.530 "major": 25, 00:08:38.530 "minor": 1, 00:08:38.530 "patch": 0, 00:08:38.530 "suffix": "-pre", 00:08:38.530 "commit": "9b3991571" 00:08:38.530 } 00:08:38.530 } 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:38.530 13:07:36 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.530 13:07:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:38.530 13:07:36 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:38.530 13:07:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:38.530 13:07:36 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:38.530 13:07:36 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:38.530 13:07:36 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:38.788 13:07:36 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:38.788 request: 00:08:38.788 { 00:08:38.788 "method": "env_dpdk_get_mem_stats", 00:08:38.788 "req_id": 1 00:08:38.788 } 00:08:38.788 Got JSON-RPC error response 00:08:38.788 response: 00:08:38.788 { 00:08:38.788 "code": -32601, 00:08:38.788 "message": "Method not found" 00:08:38.788 } 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:39.045 13:07:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3057945 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3057945 ']' 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3057945 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057945 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057945' 00:08:39.045 killing process with pid 3057945 00:08:39.045 
13:07:36 app_cmdline -- common/autotest_common.sh@973 -- # kill 3057945 00:08:39.045 13:07:36 app_cmdline -- common/autotest_common.sh@978 -- # wait 3057945 00:08:39.303 00:08:39.303 real 0m1.604s 00:08:39.303 user 0m1.989s 00:08:39.303 sys 0m0.477s 00:08:39.303 13:07:36 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.303 13:07:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:39.303 ************************************ 00:08:39.303 END TEST app_cmdline 00:08:39.303 ************************************ 00:08:39.303 13:07:36 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:39.303 13:07:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.303 13:07:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.303 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:08:39.561 ************************************ 00:08:39.561 START TEST version 00:08:39.561 ************************************ 00:08:39.561 13:07:36 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:39.561 * Looking for test storage... 
00:08:39.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.561 13:07:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.561 13:07:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.561 13:07:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.561 13:07:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.561 13:07:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.561 13:07:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.561 13:07:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.561 13:07:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.561 13:07:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.561 13:07:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.561 13:07:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.561 13:07:37 version -- scripts/common.sh@344 -- # case "$op" in 00:08:39.561 13:07:37 version -- scripts/common.sh@345 -- # : 1 00:08:39.561 13:07:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.561 13:07:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.561 13:07:37 version -- scripts/common.sh@365 -- # decimal 1 00:08:39.561 13:07:37 version -- scripts/common.sh@353 -- # local d=1 00:08:39.561 13:07:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.561 13:07:37 version -- scripts/common.sh@355 -- # echo 1 00:08:39.561 13:07:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.561 13:07:37 version -- scripts/common.sh@366 -- # decimal 2 00:08:39.561 13:07:37 version -- scripts/common.sh@353 -- # local d=2 00:08:39.561 13:07:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.561 13:07:37 version -- scripts/common.sh@355 -- # echo 2 00:08:39.561 13:07:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.561 13:07:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.561 13:07:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.561 13:07:37 version -- scripts/common.sh@368 -- # return 0 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.561 --rc genhtml_branch_coverage=1 00:08:39.561 --rc genhtml_function_coverage=1 00:08:39.561 --rc genhtml_legend=1 00:08:39.561 --rc geninfo_all_blocks=1 00:08:39.561 --rc geninfo_unexecuted_blocks=1 00:08:39.561 00:08:39.561 ' 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.561 --rc genhtml_branch_coverage=1 00:08:39.561 --rc genhtml_function_coverage=1 00:08:39.561 --rc genhtml_legend=1 00:08:39.561 --rc geninfo_all_blocks=1 00:08:39.561 --rc geninfo_unexecuted_blocks=1 00:08:39.561 00:08:39.561 ' 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.561 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.561 --rc genhtml_branch_coverage=1 00:08:39.561 --rc genhtml_function_coverage=1 00:08:39.561 --rc genhtml_legend=1 00:08:39.561 --rc geninfo_all_blocks=1 00:08:39.561 --rc geninfo_unexecuted_blocks=1 00:08:39.561 00:08:39.561 ' 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.561 --rc genhtml_branch_coverage=1 00:08:39.561 --rc genhtml_function_coverage=1 00:08:39.561 --rc genhtml_legend=1 00:08:39.561 --rc geninfo_all_blocks=1 00:08:39.561 --rc geninfo_unexecuted_blocks=1 00:08:39.561 00:08:39.561 ' 00:08:39.561 13:07:37 version -- app/version.sh@17 -- # get_header_version major 00:08:39.561 13:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # cut -f2 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.561 13:07:37 version -- app/version.sh@17 -- # major=25 00:08:39.561 13:07:37 version -- app/version.sh@18 -- # get_header_version minor 00:08:39.561 13:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # cut -f2 00:08:39.561 13:07:37 version -- app/version.sh@18 -- # minor=1 00:08:39.561 13:07:37 version -- app/version.sh@19 -- # get_header_version patch 00:08:39.561 13:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # cut -f2 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.561 
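The get_header_version trace above pipes grep through cut and tr to pull each component (major, minor, patch, suffix) out of include/spdk/version.h. A self-contained sketch of the same pipeline against a throwaway header (the `#define` values here are illustrative, mirroring the major=25 / suffix=-pre the log extracts):

```shell
# Sketch of the get_header_version extraction traced above: grep the #define,
# take the second tab-separated field, strip the quotes. The header written
# here is a stand-in for include/spdk/version.h.
header=$(mktemp)
printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > "$header"
major=$(grep -E '^#define SPDK_VERSION_MAJOR' "$header" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX' "$header" | cut -f2 | tr -d '"')
echo "$major$suffix"
rm -f "$header"
```

Note that `cut -f2` splits on the tab between the macro name and its value, and `tr -d '"'` matters only for the quoted string suffix.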
13:07:37 version -- app/version.sh@19 -- # patch=0 00:08:39.561 13:07:37 version -- app/version.sh@20 -- # get_header_version suffix 00:08:39.561 13:07:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # cut -f2 00:08:39.561 13:07:37 version -- app/version.sh@14 -- # tr -d '"' 00:08:39.561 13:07:37 version -- app/version.sh@20 -- # suffix=-pre 00:08:39.561 13:07:37 version -- app/version.sh@22 -- # version=25.1 00:08:39.561 13:07:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:39.561 13:07:37 version -- app/version.sh@28 -- # version=25.1rc0 00:08:39.561 13:07:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:39.561 13:07:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:39.561 13:07:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:39.561 13:07:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:39.561 00:08:39.561 real 0m0.199s 00:08:39.561 user 0m0.129s 00:08:39.561 sys 0m0.096s 00:08:39.561 13:07:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.561 13:07:37 version -- common/autotest_common.sh@10 -- # set +x 00:08:39.561 ************************************ 00:08:39.561 END TEST version 00:08:39.561 ************************************ 00:08:39.561 13:07:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:39.561 13:07:37 -- spdk/autotest.sh@194 -- # uname -s 00:08:39.561 13:07:37 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:39.561 13:07:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:39.561 13:07:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:39.561 13:07:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:39.561 13:07:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.561 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.561 13:07:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:39.561 13:07:37 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:39.561 13:07:37 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:39.561 13:07:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.561 13:07:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.561 13:07:37 -- common/autotest_common.sh@10 -- # set +x 00:08:39.819 ************************************ 00:08:39.819 START TEST nvmf_tcp 00:08:39.819 ************************************ 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:39.819 * Looking for test storage... 
00:08:39.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.819 13:07:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.819 --rc genhtml_branch_coverage=1 00:08:39.819 --rc genhtml_function_coverage=1 00:08:39.819 --rc genhtml_legend=1 00:08:39.819 --rc geninfo_all_blocks=1 00:08:39.819 --rc geninfo_unexecuted_blocks=1 00:08:39.819 00:08:39.819 ' 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.819 --rc genhtml_branch_coverage=1 00:08:39.819 --rc genhtml_function_coverage=1 00:08:39.819 --rc genhtml_legend=1 00:08:39.819 --rc geninfo_all_blocks=1 00:08:39.819 --rc geninfo_unexecuted_blocks=1 00:08:39.819 00:08:39.819 ' 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:08:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.819 --rc genhtml_branch_coverage=1 00:08:39.819 --rc genhtml_function_coverage=1 00:08:39.819 --rc genhtml_legend=1 00:08:39.819 --rc geninfo_all_blocks=1 00:08:39.819 --rc geninfo_unexecuted_blocks=1 00:08:39.819 00:08:39.819 ' 00:08:39.819 13:07:37 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.819 --rc genhtml_branch_coverage=1 00:08:39.819 --rc genhtml_function_coverage=1 00:08:39.819 --rc genhtml_legend=1 00:08:39.819 --rc geninfo_all_blocks=1 00:08:39.819 --rc geninfo_unexecuted_blocks=1 00:08:39.819 00:08:39.819 ' 00:08:39.819 13:07:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:39.819 13:07:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:39.820 13:07:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:39.820 13:07:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.820 13:07:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.820 13:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:39.820 ************************************ 00:08:39.820 START TEST nvmf_target_core 00:08:39.820 ************************************ 00:08:39.820 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:39.820 * Looking for test storage... 
00:08:39.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:39.820 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.820 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.820 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.078 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.079 --rc genhtml_branch_coverage=1 00:08:40.079 --rc genhtml_function_coverage=1 00:08:40.079 --rc genhtml_legend=1 00:08:40.079 --rc geninfo_all_blocks=1 00:08:40.079 --rc geninfo_unexecuted_blocks=1 00:08:40.079 00:08:40.079 ' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.079 --rc genhtml_branch_coverage=1 
00:08:40.079 --rc genhtml_function_coverage=1 00:08:40.079 --rc genhtml_legend=1 00:08:40.079 --rc geninfo_all_blocks=1 00:08:40.079 --rc geninfo_unexecuted_blocks=1 00:08:40.079 00:08:40.079 ' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.079 --rc genhtml_branch_coverage=1 00:08:40.079 --rc genhtml_function_coverage=1 00:08:40.079 --rc genhtml_legend=1 00:08:40.079 --rc geninfo_all_blocks=1 00:08:40.079 --rc geninfo_unexecuted_blocks=1 00:08:40.079 00:08:40.079 ' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.079 --rc genhtml_branch_coverage=1 00:08:40.079 --rc genhtml_function_coverage=1 00:08:40.079 --rc genhtml_legend=1 00:08:40.079 --rc geninfo_all_blocks=1 00:08:40.079 --rc geninfo_unexecuted_blocks=1 00:08:40.079 00:08:40.079 ' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
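The `[: : integer expression expected` line above is bash's `[` builtin complaining that `'[' '' -eq 1 ']'` received an empty string where `-eq` needs an integer: the variable tested at nvmf/common.sh line 33 is unset in this run. A small sketch of the failure mode and the usual `${var:-0}` default guard (the variable name here is illustrative):

```shell
# Reproducing the failure mode: test(1) with -eq requires integers on both
# sides, so an empty/unset variable makes the comparison error out (exit 2).
# Expanding with a default, ${flag:-0}, keeps the expression well-formed.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null || echo "empty string: comparison errors out"
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag is 1"
else
  echo "flag defaulted to 0"
fi
```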
00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:40.079 13:07:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.080 ************************************ 00:08:40.080 START TEST nvmf_abort 00:08:40.080 ************************************ 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:40.080 * Looking for test storage... 
00:08:40.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.080 
13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.080 --rc genhtml_branch_coverage=1 00:08:40.080 --rc genhtml_function_coverage=1 00:08:40.080 --rc genhtml_legend=1 00:08:40.080 --rc geninfo_all_blocks=1 00:08:40.080 --rc 
geninfo_unexecuted_blocks=1 00:08:40.080 00:08:40.080 ' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.080 --rc genhtml_branch_coverage=1 00:08:40.080 --rc genhtml_function_coverage=1 00:08:40.080 --rc genhtml_legend=1 00:08:40.080 --rc geninfo_all_blocks=1 00:08:40.080 --rc geninfo_unexecuted_blocks=1 00:08:40.080 00:08:40.080 ' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.080 --rc genhtml_branch_coverage=1 00:08:40.080 --rc genhtml_function_coverage=1 00:08:40.080 --rc genhtml_legend=1 00:08:40.080 --rc geninfo_all_blocks=1 00:08:40.080 --rc geninfo_unexecuted_blocks=1 00:08:40.080 00:08:40.080 ' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.080 --rc genhtml_branch_coverage=1 00:08:40.080 --rc genhtml_function_coverage=1 00:08:40.080 --rc genhtml_legend=1 00:08:40.080 --rc geninfo_all_blocks=1 00:08:40.080 --rc geninfo_unexecuted_blocks=1 00:08:40.080 00:08:40.080 ' 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
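The cmp_versions trace that repeats before each test above (`lt 1.15 2`, splitting on `IFS=.-:` and comparing component-wise) decides whether the installed lcov is new enough to warrant the extra `--rc` coverage options. A simplified sketch of that comparison, reduced to the less-than case and splitting on dots only (the real scripts/common.sh also splits on '-' and ':' and supports other operators):

```shell
# Simplified sketch of the cmp_versions idiom traced above: split two dotted
# version strings and compare them component-wise, numerically, left to right.
# Only the "<" case is modeled; missing components are treated as 0.
version_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
```

In the log, `lt 1.15 2` succeeding is what leads to `lcov_rc_opt` being set and the LCOV_OPTS export that follows.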
00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.080 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.340 13:07:37 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.340 13:07:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.265 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.265 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.265 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.265 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.265 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.265 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.265 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.266 13:07:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:42.266 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:42.266 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.266 13:07:39 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:42.266 Found net devices under 0000:09:00.0: cvl_0_0 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:08:42.266 Found net devices under 0000:09:00.1: cvl_0_1 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.266 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.525 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.525 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.525 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.525 13:07:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:42.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:08:42.525 00:08:42.525 --- 10.0.0.2 ping statistics --- 00:08:42.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.525 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:08:42.525 00:08:42.525 --- 10.0.0.1 ping statistics --- 00:08:42.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.525 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.525 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3060036 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3060036 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3060036 ']' 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.526 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.526 [2024-11-25 13:07:40.113384] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:08:42.526 [2024-11-25 13:07:40.113467] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.785 [2024-11-25 13:07:40.190624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.785 [2024-11-25 13:07:40.252343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.785 [2024-11-25 13:07:40.252397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.785 [2024-11-25 13:07:40.252424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.785 [2024-11-25 13:07:40.252436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.785 [2024-11-25 13:07:40.252446] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:42.785 [2024-11-25 13:07:40.254008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.785 [2024-11-25 13:07:40.254069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.785 [2024-11-25 13:07:40.254074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.785 [2024-11-25 13:07:40.409560] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.785 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.044 Malloc0 00:08:43.044 13:07:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.044 Delay0 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.044 [2024-11-25 13:07:40.484407] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.044 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.045 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.045 13:07:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:43.045 [2024-11-25 13:07:40.599053] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:45.575 Initializing NVMe Controllers 00:08:45.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:45.575 controller IO queue size 128 less than required 00:08:45.575 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:45.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:45.575 Initialization complete. Launching workers. 
00:08:45.575 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28940 00:08:45.575 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29001, failed to submit 62 00:08:45.575 success 28944, unsuccessful 57, failed 0 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:45.575 rmmod nvme_tcp 00:08:45.575 rmmod nvme_fabrics 00:08:45.575 rmmod nvme_keyring 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:45.575 13:07:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3060036 ']' 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3060036 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3060036 ']' 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3060036 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060036 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060036' 00:08:45.575 killing process with pid 3060036 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3060036 00:08:45.575 13:07:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3060036 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.575 13:07:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.480 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:47.480 00:08:47.480 real 0m7.530s 00:08:47.480 user 0m10.945s 00:08:47.480 sys 0m2.631s 00:08:47.480 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.480 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.480 ************************************ 00:08:47.480 END TEST nvmf_abort 00:08:47.480 ************************************ 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.739 ************************************ 00:08:47.739 START TEST nvmf_ns_hotplug_stress 00:08:47.739 ************************************ 00:08:47.739 13:07:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:47.739 * Looking for test storage... 00:08:47.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.739 
13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.739 13:07:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.739 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 00:08:47.740 ' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 00:08:47.740 ' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 00:08:47.740 ' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:47.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.740 --rc genhtml_branch_coverage=1 00:08:47.740 --rc genhtml_function_coverage=1 00:08:47.740 --rc genhtml_legend=1 00:08:47.740 --rc geninfo_all_blocks=1 00:08:47.740 --rc geninfo_unexecuted_blocks=1 00:08:47.740 
00:08:47.740 ' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:47.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:47.740 13:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:50.324 13:07:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:50.324 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:50.324 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:50.324 13:07:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:50.324 Found net devices under 0000:09:00.0: cvl_0_0 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.324 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:50.325 13:07:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:50.325 Found net devices under 0000:09:00.1: cvl_0_1 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.325 13:07:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:50.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:08:50.325 00:08:50.325 --- 10.0.0.2 ping statistics --- 00:08:50.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.325 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:50.325 00:08:50.325 --- 10.0.0.1 ping statistics --- 00:08:50.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.325 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3062390 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3062390 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3062390 ']' 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.325 13:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.325 [2024-11-25 13:07:47.773915] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:08:50.325 [2024-11-25 13:07:47.774014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.325 [2024-11-25 13:07:47.845541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.325 [2024-11-25 13:07:47.902684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.325 [2024-11-25 13:07:47.902738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.325 [2024-11-25 13:07:47.902766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.325 [2024-11-25 13:07:47.902777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.325 [2024-11-25 13:07:47.902786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:50.325 [2024-11-25 13:07:47.904240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.325 [2024-11-25 13:07:47.904312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.325 [2024-11-25 13:07:47.904314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:50.584 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.841 [2024-11-25 13:07:48.304628] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.841 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:51.100 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.357 [2024-11-25 13:07:48.839439] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.357 13:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.613 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:51.871 Malloc0 00:08:51.871 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:52.129 Delay0 00:08:52.129 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.387 13:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:52.643 NULL1 00:08:52.643 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:52.902 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3062809 00:08:52.902 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:52.902 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:08:52.902 13:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.274 Read completed with error (sct=0, sc=11) 00:08:54.274 13:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.532 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:54.532 13:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:54.532 13:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:54.789 true 00:08:54.789 13:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:08:54.789 13:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:55.722 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:55.979 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:55.979 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:56.237 true 00:08:56.237 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:08:56.237 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.494 13:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.752 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:56.752 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:57.009 true 00:08:57.009 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:08:57.009 13:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.942 13:07:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:58.200 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:58.200 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:58.457 true 00:08:58.457 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:08:58.457 13:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.715 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.971 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:58.971 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:59.228 true 00:08:59.228 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:08:59.228 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.485 13:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.742 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:59.742 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:59.999 true 00:08:59.999 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:08:59.999 13:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.080 13:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.080 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.338 13:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:01.338 13:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:01.596 true 00:09:01.596 13:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:01.596 13:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.854 13:07:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.112 13:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:02.112 13:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:02.370 true 00:09:02.370 13:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:02.370 13:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.302 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.302 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:03.302 13:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:03.560 true 00:09:03.560 13:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:03.560 13:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.818 13:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.076 13:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:04.076 13:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:04.333 true 00:09:04.333 13:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:04.333 13:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.591 13:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.849 13:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:04.849 13:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:05.106 true 00:09:05.364 13:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:05.364 13:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.296 13:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.296 
13:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:06.296 13:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:06.553 true 00:09:06.553 13:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:06.553 13:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.116 13:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.116 13:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:07.116 13:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:07.373 true 00:09:07.630 13:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:07.630 13:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.887 13:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.145 13:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:08.145 13:08:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:08.402 true 00:09:08.402 13:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:08.402 13:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.335 13:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.592 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:09.593 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:09.850 true 00:09:09.850 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:09.850 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.107 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.364 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1016 00:09:10.364 13:08:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:10.622 true 00:09:10.622 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:10.622 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.880 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.137 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:11.137 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:11.395 true 00:09:11.395 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:11.395 13:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.327 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.327 13:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.585 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:12.585 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:13.151 true 00:09:13.151 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:13.151 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.151 13:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.408 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:13.409 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:13.667 true 00:09:13.925 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:13.925 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.183 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.439 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:14.439 13:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:14.697 true 00:09:14.697 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:14.697 13:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:15.629 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.888 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:15.888 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:16.145 true 00:09:16.145 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:16.145 13:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.402 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.964 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:16.964 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:16.964 true 00:09:16.964 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:16.964 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.221 13:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.477 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:17.477 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:17.735 true 00:09:17.992 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:17.992 13:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.921 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:19.178 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:19.178 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:19.436 true 00:09:19.436 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:19.436 13:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.694 13:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.951 13:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:19.951 13:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:20.210 true 00:09:20.210 13:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:20.210 13:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.468 13:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.726 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:20.726 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:20.984 true 00:09:20.984 13:08:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:20.984 13:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.917 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.174 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:22.174 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:22.432 true 00:09:22.432 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:22.432 13:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.690 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.947 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:22.947 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:23.205 true 00:09:23.205 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3062809 00:09:23.205 13:08:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:23.463 Initializing NVMe Controllers
00:09:23.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:23.464 Controller IO queue size 128, less than required.
00:09:23.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:23.464 Controller IO queue size 128, less than required.
00:09:23.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:23.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:23.464 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:23.464 Initialization complete. Launching workers.
00:09:23.464 ========================================================
00:09:23.464 Latency(us)
00:09:23.464 Device Information : IOPS MiB/s Average min max
00:09:23.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 672.20 0.33 78671.42 3090.49 1012775.75
00:09:23.464 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8705.02 4.25 14705.41 3224.99 535403.51
00:09:23.464 ========================================================
00:09:23.464 Total : 9377.22 4.58 19290.77 3090.49 1012775.75
00:09:23.464
00:09:23.464 13:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:23.722 13:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:23.722 13:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:23.981 true
00:09:23.981 13:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3062809
00:09:23.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3062809) - No such process
00:09:23.981 13:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3062809
00:09:23.981 13:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:24.241 13:08:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:24.499
13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:24.499 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:24.499 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:24.499 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.499 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:24.757 null0 00:09:24.757 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:24.757 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:24.757 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:25.015 null1 00:09:25.015 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:25.015 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:25.015 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:25.273 null2 00:09:25.273 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:25.273 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:25.273 13:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:25.532 null3 00:09:25.532 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:25.532 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:25.532 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:25.791 null4 00:09:25.791 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:25.791 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:25.791 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:26.050 null5 00:09:26.050 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:26.050 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:26.050 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:26.310 null6 00:09:26.310 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:26.310 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:26.310 13:08:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:26.570 null7 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.570 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.829 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
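The interleaved entries above come from ns_hotplug_stress.sh launching eight background `add_remove` workers (sh@62-66), each of which repeatedly attaches a null bdev as a namespace and detaches it (sh@14-18). A minimal runnable sketch of that launch pattern is below; the `rpc` function is a hypothetical stand-in that only echoes its arguments, so the sketch runs without a live SPDK target — against a real target you would invoke scripts/rpc.py with the same verbs instead.

```shell
# Hypothetical stub for scripts/rpc.py so the sketch is self-contained;
# it just echoes the RPC it would have issued.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# One worker: attach bdev $2 as namespace $1, then detach it, for 10 cycles
# (mirrors the sh@16-18 loop visible in the log).
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

# Launch one background worker per null bdev and collect their PIDs,
# as the sh@62-64 loop does for null0..null7, then wait on all of them (sh@66).
pids=()
for n in 0 1 2 3 4 5 6 7; do
    add_remove "$((n + 1))" "null$n" &
    pids+=($!)
done
wait "${pids[@]}"
```

Because the workers run concurrently, their add/remove RPCs interleave arbitrarily, which is exactly why the log entries for the eight namespaces appear shuffled together.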
00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3066880 3066881 3066883 3066885 3066887 3066889 3066891 3066893 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:26.830 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:27.088 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.347 13:08:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:27.606 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:27.865 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:28.123 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:28.123 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:28.123 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:28.123 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:28.124 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.124 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.124 13:08:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:28.124 13:08:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:28.690 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.690 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:28.691 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:28.949 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:28.949 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:28.949 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:28.949 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:28.949 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:28.949 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.207 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.208 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:29.466 13:08:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:29.724 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:29.982 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:30.240 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.240 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.240 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:30.240 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:30.241 13:08:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:30.500 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:30.758 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:30.758 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:30.758 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:30.758 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:30.758 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:30.758 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:30.758 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.016 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:31.274 13:08:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:31.533 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:31.792 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.050 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:32.051 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:09:32.309 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:32.309 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:32.567 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:32.567 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:32.567 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:32.567 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:32.567 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:32.567 13:08:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:32.825 rmmod nvme_tcp 00:09:32.825 rmmod nvme_fabrics 00:09:32.825 rmmod nvme_keyring 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3062390 ']' 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3062390 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 3062390 ']' 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3062390 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3062390 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3062390' 00:09:32.825 killing process with pid 3062390 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3062390 00:09:32.825 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3062390 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.083 13:08:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.625 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.625 00:09:35.625 real 0m47.485s 00:09:35.625 user 3m40.506s 00:09:35.625 sys 0m16.088s 00:09:35.625 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.626 ************************************ 00:09:35.626 END TEST nvmf_ns_hotplug_stress 00:09:35.626 ************************************ 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.626 ************************************ 00:09:35.626 START TEST nvmf_delete_subsystem 00:09:35.626 ************************************ 00:09:35.626 
13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:35.626 * Looking for test storage... 00:09:35.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.626 13:08:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.626 13:08:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:35.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.626 --rc genhtml_branch_coverage=1 00:09:35.626 --rc genhtml_function_coverage=1 00:09:35.626 --rc genhtml_legend=1 00:09:35.626 --rc geninfo_all_blocks=1 00:09:35.626 --rc geninfo_unexecuted_blocks=1 00:09:35.626 00:09:35.626 ' 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:35.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.626 --rc genhtml_branch_coverage=1 00:09:35.626 --rc genhtml_function_coverage=1 00:09:35.626 --rc genhtml_legend=1 00:09:35.626 --rc geninfo_all_blocks=1 00:09:35.626 --rc geninfo_unexecuted_blocks=1 00:09:35.626 00:09:35.626 ' 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:35.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.626 --rc genhtml_branch_coverage=1 00:09:35.626 --rc genhtml_function_coverage=1 00:09:35.626 --rc genhtml_legend=1 00:09:35.626 --rc geninfo_all_blocks=1 00:09:35.626 --rc geninfo_unexecuted_blocks=1 00:09:35.626 00:09:35.626 ' 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:35.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.626 --rc genhtml_branch_coverage=1 00:09:35.626 --rc genhtml_function_coverage=1 00:09:35.626 --rc genhtml_legend=1 00:09:35.626 --rc geninfo_all_blocks=1 00:09:35.626 --rc geninfo_unexecuted_blocks=1 00:09:35.626 00:09:35.626 ' 
00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.626 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.627 13:08:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.627 13:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.546 13:08:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:37.546 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:37.546 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:37.546 Found net devices under 0000:09:00.0: cvl_0_0 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:09:37.546 Found net devices under 0000:09:00.1: cvl_0_1 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.546 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:37.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:09:37.547 00:09:37.547 --- 10.0.0.2 ping statistics --- 00:09:37.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.547 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:37.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:09:37.547 00:09:37.547 --- 10.0.0.1 ping statistics --- 00:09:37.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.547 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.547 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:37.873 13:08:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3069788 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3069788 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3069788 ']' 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.873 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.874 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.874 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.874 [2024-11-25 13:08:35.264088] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:09:37.874 [2024-11-25 13:08:35.264180] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.874 [2024-11-25 13:08:35.333951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.874 [2024-11-25 13:08:35.387385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.874 [2024-11-25 13:08:35.387443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.874 [2024-11-25 13:08:35.387471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.874 [2024-11-25 13:08:35.387482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.874 [2024-11-25 13:08:35.387491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:37.874 [2024-11-25 13:08:35.388906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.874 [2024-11-25 13:08:35.388911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.170 [2024-11-25 13:08:35.535491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.170 [2024-11-25 13:08:35.551950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.170 NULL1 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.170 Delay0 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.170 13:08:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3069812 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:38.170 13:08:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:38.170 [2024-11-25 13:08:35.636544] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
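The interface discovery near the top of this chunk (nvmf/common.sh lines 410–429, the "Found net devices under 0000:09:00.0: cvl_0_0" records) boils down to a sysfs glob followed by a basename strip. Below is a minimal runnable sketch of that pattern, using a throwaway temporary directory in place of the real /sys/bus/pci/devices tree; the PCI address and interface name are taken from this log, but the fake-sysfs layout is illustrative only, and the sketch omits the operstate check (the `[[ up == up ]]` test) that the real script performs per interface.

```shell
# Sketch of the pci_net_devs discovery pattern from nvmf/common.sh.
# A temp directory stands in for /sys/bus/pci/devices; the real script
# iterates every address in pci_devs the same way.
sysfs=$(mktemp -d)
pci="0000:09:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"          # fake NIC exposed by the driver

pci_net_devs=("$sysfs/$pci/net/"*)          # one entry per net interface
if (( ${#pci_net_devs[@]} == 0 )); then
    echo "No net devices under $pci" >&2
fi
pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# → Found net devices under 0000:09:00.0: cvl_0_0

rm -rf "$sysfs"
```

The matched interfaces are then appended to `net_devs`, which becomes `TCP_INTERFACE_LIST` in the `nvmf_tcp_init` records that follow in the log.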
00:09:40.070 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.070 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.070 13:08:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error 
(sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 [2024-11-25 13:08:37.767809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e84a0 is same with the state(6) to be set 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.329 Write completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 Read completed with error (sct=0, sc=8) 00:09:40.329 starting I/O failed: -6 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 
Read completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 
00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 starting I/O failed: -6 00:09:40.330 [2024-11-25 13:08:37.768362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5618000c80 is same with the state(6) to be set 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed 
with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 
00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Write completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed with error (sct=0, sc=8) 00:09:40.330 Read completed 
with error (sct=0, sc=8) 00:09:40.331 [2024-11-25 13:08:37.768933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e8860 is same with the state(6) to be set 00:09:41.267 [2024-11-25 13:08:38.734162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e99a0 is same with the state(6) to be set 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 [2024-11-25 13:08:38.772350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e82c0 is same with the state(6) to be set 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed 
with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 [2024-11-25 13:08:38.772631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e8680 is same with the state(6) to be set 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error 
(sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 [2024-11-25 13:08:38.772819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f561800d840 is same with the state(6) to be set 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Write completed with error (sct=0, sc=8) 00:09:41.267 Read completed with error (sct=0, sc=8) 
00:09:41.267 [2024-11-25 13:08:38.773248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f561800d060 is same with the state(6) to be set 00:09:41.267 Initializing NVMe Controllers 00:09:41.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:41.267 Controller IO queue size 128, less than required. 00:09:41.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:41.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:41.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:41.267 Initialization complete. Launching workers. 00:09:41.267 ======================================================== 00:09:41.267 Latency(us) 00:09:41.267 Device Information : IOPS MiB/s Average min max 00:09:41.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.70 0.08 903263.76 928.24 2003137.08 00:09:41.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.74 0.08 924724.83 595.03 2004896.21 00:09:41.267 ======================================================== 00:09:41.267 Total : 332.44 0.16 913834.14 595.03 2004896.21 00:09:41.267 00:09:41.267 [2024-11-25 13:08:38.774081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e99a0 (9): Bad file descriptor 00:09:41.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:41.267 13:08:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.267 13:08:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:41.267 13:08:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3069812 00:09:41.267 13:08:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3069812 00:09:41.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3069812) - No such process 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3069812 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3069812 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3069812 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:41.832 
13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.832 [2024-11-25 13:08:39.295528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3070229 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # 
delay=0 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:41.832 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:41.832 [2024-11-25 13:08:39.369212] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:42.397 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:42.398 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:42.398 13:08:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:42.962 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:42.962 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:42.962 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:43.220 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.221 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:43.221 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:43.786 13:08:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:43.786 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:43.786 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.356 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.356 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:44.356 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.922 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:44.922 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:44.922 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:44.922 Initializing NVMe Controllers 00:09:44.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:44.922 Controller IO queue size 128, less than required. 00:09:44.922 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:44.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:44.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:44.922 Initialization complete. Launching workers. 
00:09:44.922 ======================================================== 00:09:44.922 Latency(us) 00:09:44.922 Device Information : IOPS MiB/s Average min max 00:09:44.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003063.53 1000217.54 1011063.24 00:09:44.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005648.25 1000157.68 1013920.13 00:09:44.922 ======================================================== 00:09:44.922 Total : 256.00 0.12 1004355.89 1000157.68 1013920.13 00:09:44.922 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3070229 00:09:45.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3070229) - No such process 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3070229 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.180 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:09:45.180 rmmod nvme_tcp 00:09:45.438 rmmod nvme_fabrics 00:09:45.438 rmmod nvme_keyring 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3069788 ']' 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3069788 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3069788 ']' 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3069788 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3069788 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3069788' 00:09:45.438 killing process with pid 3069788 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3069788 00:09:45.438 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3069788 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.699 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.609 00:09:47.609 real 0m12.473s 00:09:47.609 user 0m27.927s 00:09:47.609 sys 0m3.014s 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.609 ************************************ 00:09:47.609 END TEST 
nvmf_delete_subsystem 00:09:47.609 ************************************ 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.609 ************************************ 00:09:47.609 START TEST nvmf_host_management 00:09:47.609 ************************************ 00:09:47.609 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:47.869 * Looking for test storage... 00:09:47.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.869 13:08:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.869 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:47.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.869 --rc genhtml_branch_coverage=1 00:09:47.869 --rc genhtml_function_coverage=1 00:09:47.869 --rc genhtml_legend=1 00:09:47.870 --rc 
geninfo_all_blocks=1 00:09:47.870 --rc geninfo_unexecuted_blocks=1 00:09:47.870 00:09:47.870 ' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:47.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.870 --rc genhtml_branch_coverage=1 00:09:47.870 --rc genhtml_function_coverage=1 00:09:47.870 --rc genhtml_legend=1 00:09:47.870 --rc geninfo_all_blocks=1 00:09:47.870 --rc geninfo_unexecuted_blocks=1 00:09:47.870 00:09:47.870 ' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:47.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.870 --rc genhtml_branch_coverage=1 00:09:47.870 --rc genhtml_function_coverage=1 00:09:47.870 --rc genhtml_legend=1 00:09:47.870 --rc geninfo_all_blocks=1 00:09:47.870 --rc geninfo_unexecuted_blocks=1 00:09:47.870 00:09:47.870 ' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:47.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.870 --rc genhtml_branch_coverage=1 00:09:47.870 --rc genhtml_function_coverage=1 00:09:47.870 --rc genhtml_legend=1 00:09:47.870 --rc geninfo_all_blocks=1 00:09:47.870 --rc geninfo_unexecuted_blocks=1 00:09:47.870 00:09:47.870 ' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.870 
13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.870 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:50.403 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:50.404 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:50.404 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.404 13:08:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:50.404 Found net devices under 0000:09:00.0: cvl_0_0 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:50.404 Found net devices under 0000:09:00.1: cvl_0_1 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.404 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:09:50.405 00:09:50.405 --- 10.0.0.2 ping statistics --- 00:09:50.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.405 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:09:50.405 00:09:50.405 --- 10.0.0.1 ping statistics --- 00:09:50.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.405 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.405 13:08:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3072705 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3072705 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3072705 ']' 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.405 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.405 [2024-11-25 13:08:47.847374] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:09:50.405 [2024-11-25 13:08:47.847464] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.405 [2024-11-25 13:08:47.920947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.405 [2024-11-25 13:08:47.982444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.405 [2024-11-25 13:08:47.982498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.405 [2024-11-25 13:08:47.982527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.405 [2024-11-25 13:08:47.982539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.405 [2024-11-25 13:08:47.982549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:50.405 [2024-11-25 13:08:47.984199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.405 [2024-11-25 13:08:47.984265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.405 [2024-11-25 13:08:47.984330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:50.405 [2024-11-25 13:08:47.984334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.664 [2024-11-25 13:08:48.140200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:50.664 13:08:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.664 Malloc0 00:09:50.664 [2024-11-25 13:08:48.220128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3072756 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3072756 /var/tmp/bdevperf.sock 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3072756 ']' 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:50.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:50.664 { 00:09:50.664 "params": { 00:09:50.664 "name": "Nvme$subsystem", 00:09:50.664 "trtype": "$TEST_TRANSPORT", 00:09:50.664 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:50.664 "adrfam": "ipv4", 00:09:50.664 "trsvcid": "$NVMF_PORT", 00:09:50.664 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:50.664 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:50.664 "hdgst": ${hdgst:-false}, 
00:09:50.664 "ddgst": ${ddgst:-false} 00:09:50.664 }, 00:09:50.664 "method": "bdev_nvme_attach_controller" 00:09:50.664 } 00:09:50.664 EOF 00:09:50.664 )") 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:50.664 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:50.665 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:50.665 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:50.665 "params": { 00:09:50.665 "name": "Nvme0", 00:09:50.665 "trtype": "tcp", 00:09:50.665 "traddr": "10.0.0.2", 00:09:50.665 "adrfam": "ipv4", 00:09:50.665 "trsvcid": "4420", 00:09:50.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.665 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:50.665 "hdgst": false, 00:09:50.665 "ddgst": false 00:09:50.665 }, 00:09:50.665 "method": "bdev_nvme_attach_controller" 00:09:50.665 }' 00:09:50.665 [2024-11-25 13:08:48.298647] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:09:50.665 [2024-11-25 13:08:48.298724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072756 ] 00:09:50.923 [2024-11-25 13:08:48.370635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.923 [2024-11-25 13:08:48.431224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.181 Running I/O for 10 seconds... 
00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:51.181 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=547 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 547 -ge 100 ']' 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.440 [2024-11-25 13:08:49.054947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 
13:08:49.055094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055247] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.055270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe02f90 is same with the state(6) to be set 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.440 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:51.440 [2024-11-25 13:08:49.063958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.440 [2024-11-25 13:08:49.064002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 13:08:49.064031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.440 [2024-11-25 13:08:49.064048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 13:08:49.064062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.440 [2024-11-25 13:08:49.064075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 13:08:49.064089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:51.440 [2024-11-25 13:08:49.064102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 13:08:49.064115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ada80 is same with the state(6) to be set 00:09:51.440 [2024-11-25 13:08:49.064466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.440 [2024-11-25 13:08:49.064493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 13:08:49.064519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.440 [2024-11-25 13:08:49.064535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 13:08:49.064551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.440 [2024-11-25 13:08:49.064565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 13:08:49.064586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.440 [2024-11-25 13:08:49.064602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.440 [2024-11-25 
13:08:49.064639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.440 [2024-11-25 13:08:49.064652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.064984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.064998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 
13:08:49.065260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.441 [2024-11-25 13:08:49.065598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.441 [2024-11-25 13:08:49.065626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 
[2024-11-25 13:08:49.065949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.065976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.065989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 [2024-11-25 13:08:49.066371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:51.442 [2024-11-25 13:08:49.066384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:51.442 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.443 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:51.443 [2024-11-25 13:08:49.067596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:51.443 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:51.443 00:09:51.443 Latency(us) 00:09:51.443 [2024-11-25T12:08:49.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.443 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:51.443 Job: 
Nvme0n1 ended in about 0.41 seconds with error 00:09:51.443 Verification LBA range: start 0x0 length 0x400 00:09:51.443 Nvme0n1 : 0.41 1560.72 97.55 156.07 0.00 36219.85 2852.03 34758.35 00:09:51.443 [2024-11-25T12:08:49.102Z] =================================================================================================================== 00:09:51.443 [2024-11-25T12:08:49.102Z] Total : 1560.72 97.55 156.07 0.00 36219.85 2852.03 34758.35 00:09:51.443 [2024-11-25 13:08:49.069528] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:51.443 [2024-11-25 13:08:49.069558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6ada80 (9): Bad file descriptor 00:09:51.699 [2024-11-25 13:08:49.115749] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3072756 00:09:52.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3072756) - No such process 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:52.632 13:08:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:52.632 { 00:09:52.632 "params": { 00:09:52.632 "name": "Nvme$subsystem", 00:09:52.632 "trtype": "$TEST_TRANSPORT", 00:09:52.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.632 "adrfam": "ipv4", 00:09:52.632 "trsvcid": "$NVMF_PORT", 00:09:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.632 "hdgst": ${hdgst:-false}, 00:09:52.632 "ddgst": ${ddgst:-false} 00:09:52.632 }, 00:09:52.632 "method": "bdev_nvme_attach_controller" 00:09:52.632 } 00:09:52.632 EOF 00:09:52.632 )") 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:52.632 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:52.632 "params": { 00:09:52.632 "name": "Nvme0", 00:09:52.632 "trtype": "tcp", 00:09:52.632 "traddr": "10.0.0.2", 00:09:52.632 "adrfam": "ipv4", 00:09:52.632 "trsvcid": "4420", 00:09:52.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:52.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:52.632 "hdgst": false, 00:09:52.632 "ddgst": false 00:09:52.632 }, 00:09:52.632 "method": "bdev_nvme_attach_controller" 00:09:52.632 }' 00:09:52.632 [2024-11-25 13:08:50.116916] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:09:52.632 [2024-11-25 13:08:50.116990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073028 ] 00:09:52.632 [2024-11-25 13:08:50.189437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.632 [2024-11-25 13:08:50.253248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.890 Running I/O for 1 seconds... 00:09:54.082 1647.00 IOPS, 102.94 MiB/s 00:09:54.083 Latency(us) 00:09:54.083 [2024-11-25T12:08:51.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.083 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:54.083 Verification LBA range: start 0x0 length 0x400 00:09:54.083 Nvme0n1 : 1.04 1661.12 103.82 0.00 0.00 37910.00 6019.60 34175.81 00:09:54.083 [2024-11-25T12:08:51.742Z] =================================================================================================================== 00:09:54.083 [2024-11-25T12:08:51.742Z] Total : 1661.12 103.82 0.00 0.00 37910.00 6019.60 34175.81 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:54.083 13:08:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.083 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.083 rmmod nvme_tcp 00:09:54.340 rmmod nvme_fabrics 00:09:54.340 rmmod nvme_keyring 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3072705 ']' 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3072705 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3072705 ']' 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3072705 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3072705 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072705' 00:09:54.340 killing process with pid 3072705 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3072705 00:09:54.340 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3072705 00:09:54.599 [2024-11-25 13:08:52.045898] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.599 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:56.507 00:09:56.507 real 0m8.879s 00:09:56.507 user 0m19.503s 00:09:56.507 sys 0m2.791s 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:56.507 ************************************ 00:09:56.507 END TEST nvmf_host_management 00:09:56.507 ************************************ 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.507 13:08:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.766 ************************************ 00:09:56.766 START TEST nvmf_lvol 00:09:56.766 ************************************ 00:09:56.766 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:56.766 * Looking for test storage... 
00:09:56.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.767 13:08:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.767 --rc genhtml_branch_coverage=1 00:09:56.767 --rc genhtml_function_coverage=1 00:09:56.767 --rc genhtml_legend=1 00:09:56.767 --rc geninfo_all_blocks=1 00:09:56.767 --rc geninfo_unexecuted_blocks=1 
00:09:56.767 00:09:56.767 ' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.767 --rc genhtml_branch_coverage=1 00:09:56.767 --rc genhtml_function_coverage=1 00:09:56.767 --rc genhtml_legend=1 00:09:56.767 --rc geninfo_all_blocks=1 00:09:56.767 --rc geninfo_unexecuted_blocks=1 00:09:56.767 00:09:56.767 ' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.767 --rc genhtml_branch_coverage=1 00:09:56.767 --rc genhtml_function_coverage=1 00:09:56.767 --rc genhtml_legend=1 00:09:56.767 --rc geninfo_all_blocks=1 00:09:56.767 --rc geninfo_unexecuted_blocks=1 00:09:56.767 00:09:56.767 ' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.767 --rc genhtml_branch_coverage=1 00:09:56.767 --rc genhtml_function_coverage=1 00:09:56.767 --rc genhtml_legend=1 00:09:56.767 --rc geninfo_all_blocks=1 00:09:56.767 --rc geninfo_unexecuted_blocks=1 00:09:56.767 00:09:56.767 ' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.767 13:08:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:56.767 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:56.768 13:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:59.301 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.301 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.301 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.301 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.301 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:59.302 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:59.302 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.302 
13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:59.302 Found net devices under 0000:09:00.0: cvl_0_0 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.302 13:08:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:59.302 Found net devices under 0000:09:00.1: cvl_0_1 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:59.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:09:59.302 00:09:59.302 --- 10.0.0.2 ping statistics --- 00:09:59.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.302 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:09:59.302 00:09:59.302 --- 10.0.0.1 ping statistics --- 00:09:59.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.302 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3075242 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3075242 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3075242 ']' 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.302 13:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:59.302 [2024-11-25 13:08:56.796637] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:09:59.302 [2024-11-25 13:08:56.796720] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.302 [2024-11-25 13:08:56.867948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.302 [2024-11-25 13:08:56.927277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.302 [2024-11-25 13:08:56.927362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.302 [2024-11-25 13:08:56.927392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.302 [2024-11-25 13:08:56.927404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.302 [2024-11-25 13:08:56.927414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:59.302 [2024-11-25 13:08:56.928995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.302 [2024-11-25 13:08:56.929122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.302 [2024-11-25 13:08:56.929126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.560 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.560 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:59.560 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.560 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.560 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:59.560 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.560 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:59.817 [2024-11-25 13:08:57.317844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.818 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.106 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:00.106 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:00.364 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:00.364 13:08:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:00.621 13:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:00.878 13:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=18c81e3e-ecc0-45ff-908a-11c0a4bef21a 00:10:00.878 13:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 18c81e3e-ecc0-45ff-908a-11c0a4bef21a lvol 20 00:10:01.135 13:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9558ef88-b4d8-495f-ad72-4762fbebdb06 00:10:01.135 13:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:01.705 13:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9558ef88-b4d8-495f-ad72-4762fbebdb06 00:10:01.705 13:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:01.962 [2024-11-25 13:08:59.567130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.962 13:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:02.219 13:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3075564 00:10:02.219 13:08:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:02.219 13:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:03.594 13:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9558ef88-b4d8-495f-ad72-4762fbebdb06 MY_SNAPSHOT 00:10:03.594 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c435e4f6-b18f-47aa-a31f-ba73a0448e4b 00:10:03.594 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9558ef88-b4d8-495f-ad72-4762fbebdb06 30 00:10:04.160 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c435e4f6-b18f-47aa-a31f-ba73a0448e4b MY_CLONE 00:10:04.418 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=89e2feb6-5ffd-404b-b321-31ade032827e 00:10:04.418 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 89e2feb6-5ffd-404b-b321-31ade032827e 00:10:04.984 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3075564 00:10:13.174 Initializing NVMe Controllers 00:10:13.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:13.174 Controller IO queue size 128, less than required. 00:10:13.174 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
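The spdk_nvme_perf summary that follows reports per-lcore IOPS, MiB/s, and average latency plus an aggregate "Total" row. As a quick sanity check (a standalone sketch; the per-core figures are copied verbatim from the log), the totals can be recomputed from the two per-core rows, with the average latency weighted by IOPS:

```python
# Per-core rows from the perf summary (lcore 3 and lcore 4).
core3 = {"iops": 10227.76, "mibs": 39.95, "avg_us": 12515.14}
core4 = {"iops": 10264.96, "mibs": 40.10, "avg_us": 12476.04}

# Throughput totals are simple sums; average latency is IOPS-weighted.
total_iops = core3["iops"] + core4["iops"]
total_mibs = core3["mibs"] + core4["mibs"]
weighted_avg = (core3["iops"] * core3["avg_us"] +
                core4["iops"] * core4["avg_us"]) / total_iops

print(f"Total IOPS:      {total_iops:.2f}")      # log reports 20492.72
print(f"Total MiB/s:     {total_mibs:.2f}")      # log reports 80.05
print(f"Weighted avg us: {weighted_avg:.2f}")    # log reports 12495.55
```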
00:10:13.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:13.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:13.174 Initialization complete. Launching workers. 00:10:13.174 ======================================================== 00:10:13.174 Latency(us) 00:10:13.174 Device Information : IOPS MiB/s Average min max 00:10:13.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10227.76 39.95 12515.14 646.46 78866.47 00:10:13.174 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10264.96 40.10 12476.04 2045.80 76407.31 00:10:13.174 ======================================================== 00:10:13.174 Total : 20492.72 80.05 12495.55 646.46 78866.47 00:10:13.174 00:10:13.174 13:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:13.174 13:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9558ef88-b4d8-495f-ad72-4762fbebdb06 00:10:13.432 13:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18c81e3e-ecc0-45ff-908a-11c0a4bef21a 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.690 rmmod nvme_tcp 00:10:13.690 rmmod nvme_fabrics 00:10:13.690 rmmod nvme_keyring 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3075242 ']' 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3075242 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3075242 ']' 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3075242 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3075242 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3075242' 00:10:13.690 killing process with pid 3075242 00:10:13.690 13:09:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3075242 00:10:13.690 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3075242 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.949 13:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.488 00:10:16.488 real 0m19.393s 00:10:16.488 user 1m6.144s 00:10:16.488 sys 0m5.472s 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:16.488 ************************************ 00:10:16.488 END TEST 
nvmf_lvol 00:10:16.488 ************************************ 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.488 ************************************ 00:10:16.488 START TEST nvmf_lvs_grow 00:10:16.488 ************************************ 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:16.488 * Looking for test storage... 00:10:16.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.488 13:09:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:16.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.488 --rc genhtml_branch_coverage=1 00:10:16.488 --rc genhtml_function_coverage=1 00:10:16.488 --rc genhtml_legend=1 00:10:16.488 --rc geninfo_all_blocks=1 00:10:16.488 --rc geninfo_unexecuted_blocks=1 00:10:16.488 00:10:16.488 ' 
00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:16.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.488 --rc genhtml_branch_coverage=1 00:10:16.488 --rc genhtml_function_coverage=1 00:10:16.488 --rc genhtml_legend=1 00:10:16.488 --rc geninfo_all_blocks=1 00:10:16.488 --rc geninfo_unexecuted_blocks=1 00:10:16.488 00:10:16.488 ' 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:16.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.488 --rc genhtml_branch_coverage=1 00:10:16.488 --rc genhtml_function_coverage=1 00:10:16.488 --rc genhtml_legend=1 00:10:16.488 --rc geninfo_all_blocks=1 00:10:16.488 --rc geninfo_unexecuted_blocks=1 00:10:16.488 00:10:16.488 ' 00:10:16.488 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:16.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.489 --rc genhtml_branch_coverage=1 00:10:16.489 --rc genhtml_function_coverage=1 00:10:16.489 --rc genhtml_legend=1 00:10:16.489 --rc geninfo_all_blocks=1 00:10:16.489 --rc geninfo_unexecuted_blocks=1 00:10:16.489 00:10:16.489 ' 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.489 13:09:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.489 
13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.489 13:09:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:16.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.489 
13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:16.489 13:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:18.399 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:18.399 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.399 
13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:18.399 Found net devices under 0000:09:00.0: cvl_0_0 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:18.399 Found net devices under 0000:09:00.1: cvl_0_1 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.399 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.400 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.400 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.400 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.400 13:09:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:10:18.400 00:10:18.400 --- 10.0.0.2 ping statistics --- 00:10:18.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.400 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:10:18.400 13:09:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:10:18.400 00:10:18.400 --- 10.0.0.1 ping statistics --- 00:10:18.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.400 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3079579 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3079579 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3079579 ']' 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.400 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:18.658 [2024-11-25 13:09:16.081902] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:10:18.659 [2024-11-25 13:09:16.081992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.659 [2024-11-25 13:09:16.155773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.659 [2024-11-25 13:09:16.214129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.659 [2024-11-25 13:09:16.214182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.659 [2024-11-25 13:09:16.214210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.659 [2024-11-25 13:09:16.214221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.659 [2024-11-25 13:09:16.214232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:18.659 [2024-11-25 13:09:16.214900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.917 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.917 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:18.917 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.917 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.917 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:18.917 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.917 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:19.176 [2024-11-25 13:09:16.601052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:19.176 ************************************ 00:10:19.176 START TEST lvs_grow_clean 00:10:19.176 ************************************ 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:19.176 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:19.433 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:19.433 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:19.692 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:19.692 13:09:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:19.692 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:19.951 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:19.951 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:19.951 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 lvol 150 00:10:20.209 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0b950515-ab8a-4eae-a5a0-d1b9a5250f6c 00:10:20.209 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:20.209 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:20.467 [2024-11-25 13:09:18.025722] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:20.467 [2024-11-25 13:09:18.025812] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:20.467 true 00:10:20.467 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:20.467 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:20.724 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:20.724 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:20.982 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0b950515-ab8a-4eae-a5a0-d1b9a5250f6c 00:10:21.240 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:21.499 [2024-11-25 13:09:19.113009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.499 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3080019 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:21.756 13:09:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3080019 /var/tmp/bdevperf.sock 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3080019 ']' 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:21.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.756 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:22.014 [2024-11-25 13:09:19.436142] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:10:22.014 [2024-11-25 13:09:19.436224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080019 ] 00:10:22.014 [2024-11-25 13:09:19.501856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.014 [2024-11-25 13:09:19.560985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.272 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.272 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:22.272 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:22.529 Nvme0n1 00:10:22.529 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:22.787 [ 00:10:22.787 { 00:10:22.787 "name": "Nvme0n1", 00:10:22.787 "aliases": [ 00:10:22.787 "0b950515-ab8a-4eae-a5a0-d1b9a5250f6c" 00:10:22.787 ], 00:10:22.787 "product_name": "NVMe disk", 00:10:22.787 "block_size": 4096, 00:10:22.787 "num_blocks": 38912, 00:10:22.787 "uuid": "0b950515-ab8a-4eae-a5a0-d1b9a5250f6c", 00:10:22.787 "numa_id": 0, 00:10:22.787 "assigned_rate_limits": { 00:10:22.787 "rw_ios_per_sec": 0, 00:10:22.787 "rw_mbytes_per_sec": 0, 00:10:22.787 "r_mbytes_per_sec": 0, 00:10:22.787 "w_mbytes_per_sec": 0 00:10:22.787 }, 00:10:22.787 "claimed": false, 00:10:22.787 "zoned": false, 00:10:22.787 "supported_io_types": { 00:10:22.787 "read": true, 
00:10:22.787 "write": true, 00:10:22.787 "unmap": true, 00:10:22.787 "flush": true, 00:10:22.787 "reset": true, 00:10:22.787 "nvme_admin": true, 00:10:22.787 "nvme_io": true, 00:10:22.787 "nvme_io_md": false, 00:10:22.787 "write_zeroes": true, 00:10:22.787 "zcopy": false, 00:10:22.787 "get_zone_info": false, 00:10:22.787 "zone_management": false, 00:10:22.787 "zone_append": false, 00:10:22.787 "compare": true, 00:10:22.787 "compare_and_write": true, 00:10:22.787 "abort": true, 00:10:22.787 "seek_hole": false, 00:10:22.787 "seek_data": false, 00:10:22.787 "copy": true, 00:10:22.787 "nvme_iov_md": false 00:10:22.787 }, 00:10:22.787 "memory_domains": [ 00:10:22.787 { 00:10:22.787 "dma_device_id": "system", 00:10:22.787 "dma_device_type": 1 00:10:22.787 } 00:10:22.787 ], 00:10:22.787 "driver_specific": { 00:10:22.787 "nvme": [ 00:10:22.787 { 00:10:22.787 "trid": { 00:10:22.787 "trtype": "TCP", 00:10:22.787 "adrfam": "IPv4", 00:10:22.787 "traddr": "10.0.0.2", 00:10:22.787 "trsvcid": "4420", 00:10:22.787 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:22.787 }, 00:10:22.787 "ctrlr_data": { 00:10:22.787 "cntlid": 1, 00:10:22.787 "vendor_id": "0x8086", 00:10:22.787 "model_number": "SPDK bdev Controller", 00:10:22.787 "serial_number": "SPDK0", 00:10:22.787 "firmware_revision": "25.01", 00:10:22.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:22.787 "oacs": { 00:10:22.787 "security": 0, 00:10:22.787 "format": 0, 00:10:22.787 "firmware": 0, 00:10:22.787 "ns_manage": 0 00:10:22.787 }, 00:10:22.787 "multi_ctrlr": true, 00:10:22.787 "ana_reporting": false 00:10:22.787 }, 00:10:22.787 "vs": { 00:10:22.787 "nvme_version": "1.3" 00:10:22.787 }, 00:10:22.787 "ns_data": { 00:10:22.787 "id": 1, 00:10:22.787 "can_share": true 00:10:22.787 } 00:10:22.787 } 00:10:22.787 ], 00:10:22.787 "mp_policy": "active_passive" 00:10:22.787 } 00:10:22.787 } 00:10:22.787 ] 00:10:23.045 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3080155 00:10:23.045 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:23.045 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:23.045 Running I/O for 10 seconds... 00:10:23.979 Latency(us) 00:10:23.979 [2024-11-25T12:09:21.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:23.979 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:10:23.979 [2024-11-25T12:09:21.638Z] =================================================================================================================== 00:10:23.979 [2024-11-25T12:09:21.638Z] Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:10:23.979 00:10:24.912 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:25.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.170 Nvme0n1 : 2.00 15494.50 60.53 0.00 0.00 0.00 0.00 0.00 00:10:25.170 [2024-11-25T12:09:22.829Z] =================================================================================================================== 00:10:25.170 [2024-11-25T12:09:22.829Z] Total : 15494.50 60.53 0.00 0.00 0.00 0.00 0.00 00:10:25.170 00:10:25.170 true 00:10:25.170 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:25.170 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:25.429 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:25.429 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:25.429 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3080155 00:10:25.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:25.995 Nvme0n1 : 3.00 15579.00 60.86 0.00 0.00 0.00 0.00 0.00 00:10:25.995 [2024-11-25T12:09:23.654Z] =================================================================================================================== 00:10:25.995 [2024-11-25T12:09:23.654Z] Total : 15579.00 60.86 0.00 0.00 0.00 0.00 0.00 00:10:25.995 00:10:26.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:26.930 Nvme0n1 : 4.00 15684.75 61.27 0.00 0.00 0.00 0.00 0.00 00:10:26.930 [2024-11-25T12:09:24.589Z] =================================================================================================================== 00:10:26.930 [2024-11-25T12:09:24.589Z] Total : 15684.75 61.27 0.00 0.00 0.00 0.00 0.00 00:10:26.930 00:10:28.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.304 Nvme0n1 : 5.00 15736.20 61.47 0.00 0.00 0.00 0.00 0.00 00:10:28.304 [2024-11-25T12:09:25.963Z] =================================================================================================================== 00:10:28.304 [2024-11-25T12:09:25.963Z] Total : 15736.20 61.47 0.00 0.00 0.00 0.00 0.00 00:10:28.304 00:10:29.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.238 Nvme0n1 : 6.00 15801.83 61.73 0.00 0.00 0.00 0.00 0.00 00:10:29.238 [2024-11-25T12:09:26.897Z] =================================================================================================================== 00:10:29.238 
[2024-11-25T12:09:26.897Z] Total : 15801.83 61.73 0.00 0.00 0.00 0.00 0.00 00:10:29.238 00:10:30.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.172 Nvme0n1 : 7.00 15840.57 61.88 0.00 0.00 0.00 0.00 0.00 00:10:30.172 [2024-11-25T12:09:27.831Z] =================================================================================================================== 00:10:30.172 [2024-11-25T12:09:27.831Z] Total : 15840.57 61.88 0.00 0.00 0.00 0.00 0.00 00:10:30.172 00:10:31.106 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.106 Nvme0n1 : 8.00 15884.62 62.05 0.00 0.00 0.00 0.00 0.00 00:10:31.106 [2024-11-25T12:09:28.765Z] =================================================================================================================== 00:10:31.106 [2024-11-25T12:09:28.765Z] Total : 15884.62 62.05 0.00 0.00 0.00 0.00 0.00 00:10:31.106 00:10:32.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.040 Nvme0n1 : 9.00 15918.78 62.18 0.00 0.00 0.00 0.00 0.00 00:10:32.040 [2024-11-25T12:09:29.699Z] =================================================================================================================== 00:10:32.040 [2024-11-25T12:09:29.699Z] Total : 15918.78 62.18 0.00 0.00 0.00 0.00 0.00 00:10:32.040 00:10:32.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.973 Nvme0n1 : 10.00 15914.40 62.17 0.00 0.00 0.00 0.00 0.00 00:10:32.973 [2024-11-25T12:09:30.632Z] =================================================================================================================== 00:10:32.973 [2024-11-25T12:09:30.632Z] Total : 15914.40 62.17 0.00 0.00 0.00 0.00 0.00 00:10:32.973 00:10:32.973 00:10:32.973 Latency(us) 00:10:32.973 [2024-11-25T12:09:30.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:32.973 Nvme0n1 : 10.01 15915.33 62.17 0.00 0.00 8038.09 4781.70 18252.99 00:10:32.973 [2024-11-25T12:09:30.632Z] =================================================================================================================== 00:10:32.973 [2024-11-25T12:09:30.632Z] Total : 15915.33 62.17 0.00 0.00 8038.09 4781.70 18252.99 00:10:32.973 { 00:10:32.973 "results": [ 00:10:32.973 { 00:10:32.973 "job": "Nvme0n1", 00:10:32.973 "core_mask": "0x2", 00:10:32.973 "workload": "randwrite", 00:10:32.973 "status": "finished", 00:10:32.973 "queue_depth": 128, 00:10:32.973 "io_size": 4096, 00:10:32.973 "runtime": 10.007458, 00:10:32.973 "iops": 15915.330346627485, 00:10:32.973 "mibps": 62.16925916651361, 00:10:32.973 "io_failed": 0, 00:10:32.973 "io_timeout": 0, 00:10:32.973 "avg_latency_us": 8038.08821456144, 00:10:32.973 "min_latency_us": 4781.700740740741, 00:10:32.973 "max_latency_us": 18252.98962962963 00:10:32.973 } 00:10:32.973 ], 00:10:32.973 "core_count": 1 00:10:32.973 } 00:10:32.973 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3080019 00:10:32.973 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3080019 ']' 00:10:32.973 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3080019 00:10:32.973 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:32.973 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.973 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3080019 00:10:33.233 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:33.233 13:09:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:33.233 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3080019' 00:10:33.233 killing process with pid 3080019 00:10:33.233 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3080019 00:10:33.233 Received shutdown signal, test time was about 10.000000 seconds 00:10:33.233 00:10:33.233 Latency(us) 00:10:33.233 [2024-11-25T12:09:30.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:33.233 [2024-11-25T12:09:30.892Z] =================================================================================================================== 00:10:33.233 [2024-11-25T12:09:30.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:33.233 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3080019 00:10:33.233 13:09:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:33.800 13:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:33.800 13:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:33.800 13:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:34.072 13:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:34.072 13:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:34.072 13:09:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:34.391 [2024-11-25 13:09:31.971166] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.391 
13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:34.391 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:34.650 request: 00:10:34.650 { 00:10:34.650 "uuid": "42c88389-a492-4ad3-b01a-3dda19cab2a4", 00:10:34.650 "method": "bdev_lvol_get_lvstores", 00:10:34.650 "req_id": 1 00:10:34.650 } 00:10:34.650 Got JSON-RPC error response 00:10:34.650 response: 00:10:34.650 { 00:10:34.650 "code": -19, 00:10:34.650 "message": "No such device" 00:10:34.650 } 00:10:34.650 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:34.650 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:34.650 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:34.650 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:34.650 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:34.910 aio_bdev 00:10:34.910 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0b950515-ab8a-4eae-a5a0-d1b9a5250f6c 00:10:34.910 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=0b950515-ab8a-4eae-a5a0-d1b9a5250f6c 00:10:34.910 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.910 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:34.910 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.910 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.910 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:35.476 13:09:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0b950515-ab8a-4eae-a5a0-d1b9a5250f6c -t 2000 00:10:35.476 [ 00:10:35.476 { 00:10:35.476 "name": "0b950515-ab8a-4eae-a5a0-d1b9a5250f6c", 00:10:35.476 "aliases": [ 00:10:35.476 "lvs/lvol" 00:10:35.476 ], 00:10:35.476 "product_name": "Logical Volume", 00:10:35.476 "block_size": 4096, 00:10:35.476 "num_blocks": 38912, 00:10:35.476 "uuid": "0b950515-ab8a-4eae-a5a0-d1b9a5250f6c", 00:10:35.476 "assigned_rate_limits": { 00:10:35.476 "rw_ios_per_sec": 0, 00:10:35.476 "rw_mbytes_per_sec": 0, 00:10:35.476 "r_mbytes_per_sec": 0, 00:10:35.476 "w_mbytes_per_sec": 0 00:10:35.476 }, 00:10:35.476 "claimed": false, 00:10:35.476 "zoned": false, 00:10:35.476 "supported_io_types": { 00:10:35.476 "read": true, 00:10:35.476 "write": true, 00:10:35.476 "unmap": true, 00:10:35.476 "flush": false, 00:10:35.476 "reset": true, 00:10:35.476 
"nvme_admin": false, 00:10:35.476 "nvme_io": false, 00:10:35.476 "nvme_io_md": false, 00:10:35.476 "write_zeroes": true, 00:10:35.476 "zcopy": false, 00:10:35.476 "get_zone_info": false, 00:10:35.476 "zone_management": false, 00:10:35.476 "zone_append": false, 00:10:35.476 "compare": false, 00:10:35.476 "compare_and_write": false, 00:10:35.476 "abort": false, 00:10:35.476 "seek_hole": true, 00:10:35.476 "seek_data": true, 00:10:35.476 "copy": false, 00:10:35.476 "nvme_iov_md": false 00:10:35.476 }, 00:10:35.476 "driver_specific": { 00:10:35.476 "lvol": { 00:10:35.476 "lvol_store_uuid": "42c88389-a492-4ad3-b01a-3dda19cab2a4", 00:10:35.476 "base_bdev": "aio_bdev", 00:10:35.476 "thin_provision": false, 00:10:35.476 "num_allocated_clusters": 38, 00:10:35.476 "snapshot": false, 00:10:35.476 "clone": false, 00:10:35.476 "esnap_clone": false 00:10:35.476 } 00:10:35.476 } 00:10:35.476 } 00:10:35.476 ] 00:10:35.476 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:35.476 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:35.476 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:35.734 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:35.734 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:35.734 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:35.991 13:09:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:35.991 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0b950515-ab8a-4eae-a5a0-d1b9a5250f6c 00:10:36.557 13:09:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42c88389-a492-4ad3-b01a-3dda19cab2a4 00:10:36.557 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:36.815 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:37.078 00:10:37.078 real 0m17.839s 00:10:37.078 user 0m17.358s 00:10:37.078 sys 0m1.898s 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:37.078 ************************************ 00:10:37.078 END TEST lvs_grow_clean 00:10:37.078 ************************************ 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:37.078 ************************************ 
00:10:37.078 START TEST lvs_grow_dirty 00:10:37.078 ************************************ 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:37.078 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:37.337 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:37.337 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:37.595 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:37.595 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:37.595 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:37.853 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:37.853 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:37.853 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 lvol 150 00:10:38.111 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0bb3a749-0aa1-482d-aea1-b0d24b2630e2 00:10:38.111 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:38.111 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:38.369 [2024-11-25 13:09:35.878656] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:38.369 [2024-11-25 13:09:35.878763] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:38.369 true 00:10:38.369 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:38.369 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:38.628 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:38.628 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:38.887 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0bb3a749-0aa1-482d-aea1-b0d24b2630e2 00:10:39.145 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:39.404 [2024-11-25 13:09:36.949899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.404 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3082154 00:10:39.663 13:09:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3082154 /var/tmp/bdevperf.sock 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3082154 ']' 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:39.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.663 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:39.663 [2024-11-25 13:09:37.277703] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:10:39.663 [2024-11-25 13:09:37.277782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082154 ] 00:10:39.921 [2024-11-25 13:09:37.344417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.921 [2024-11-25 13:09:37.401529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.921 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.921 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:39.921 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:40.485 Nvme0n1 00:10:40.485 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:40.485 [ 00:10:40.485 { 00:10:40.485 "name": "Nvme0n1", 00:10:40.485 "aliases": [ 00:10:40.485 "0bb3a749-0aa1-482d-aea1-b0d24b2630e2" 00:10:40.485 ], 00:10:40.485 "product_name": "NVMe disk", 00:10:40.485 "block_size": 4096, 00:10:40.485 "num_blocks": 38912, 00:10:40.485 "uuid": "0bb3a749-0aa1-482d-aea1-b0d24b2630e2", 00:10:40.485 "numa_id": 0, 00:10:40.485 "assigned_rate_limits": { 00:10:40.485 "rw_ios_per_sec": 0, 00:10:40.485 "rw_mbytes_per_sec": 0, 00:10:40.485 "r_mbytes_per_sec": 0, 00:10:40.485 "w_mbytes_per_sec": 0 00:10:40.485 }, 00:10:40.485 "claimed": false, 00:10:40.485 "zoned": false, 00:10:40.485 "supported_io_types": { 00:10:40.485 "read": true, 
00:10:40.485 "write": true, 00:10:40.485 "unmap": true, 00:10:40.485 "flush": true, 00:10:40.485 "reset": true, 00:10:40.485 "nvme_admin": true, 00:10:40.485 "nvme_io": true, 00:10:40.485 "nvme_io_md": false, 00:10:40.485 "write_zeroes": true, 00:10:40.485 "zcopy": false, 00:10:40.485 "get_zone_info": false, 00:10:40.485 "zone_management": false, 00:10:40.485 "zone_append": false, 00:10:40.485 "compare": true, 00:10:40.485 "compare_and_write": true, 00:10:40.485 "abort": true, 00:10:40.485 "seek_hole": false, 00:10:40.485 "seek_data": false, 00:10:40.485 "copy": true, 00:10:40.485 "nvme_iov_md": false 00:10:40.485 }, 00:10:40.485 "memory_domains": [ 00:10:40.485 { 00:10:40.485 "dma_device_id": "system", 00:10:40.485 "dma_device_type": 1 00:10:40.485 } 00:10:40.485 ], 00:10:40.485 "driver_specific": { 00:10:40.485 "nvme": [ 00:10:40.485 { 00:10:40.485 "trid": { 00:10:40.485 "trtype": "TCP", 00:10:40.485 "adrfam": "IPv4", 00:10:40.485 "traddr": "10.0.0.2", 00:10:40.485 "trsvcid": "4420", 00:10:40.485 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:40.485 }, 00:10:40.485 "ctrlr_data": { 00:10:40.485 "cntlid": 1, 00:10:40.485 "vendor_id": "0x8086", 00:10:40.485 "model_number": "SPDK bdev Controller", 00:10:40.485 "serial_number": "SPDK0", 00:10:40.485 "firmware_revision": "25.01", 00:10:40.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:40.485 "oacs": { 00:10:40.485 "security": 0, 00:10:40.485 "format": 0, 00:10:40.485 "firmware": 0, 00:10:40.485 "ns_manage": 0 00:10:40.485 }, 00:10:40.485 "multi_ctrlr": true, 00:10:40.485 "ana_reporting": false 00:10:40.485 }, 00:10:40.485 "vs": { 00:10:40.485 "nvme_version": "1.3" 00:10:40.485 }, 00:10:40.485 "ns_data": { 00:10:40.485 "id": 1, 00:10:40.485 "can_share": true 00:10:40.485 } 00:10:40.485 } 00:10:40.485 ], 00:10:40.485 "mp_policy": "active_passive" 00:10:40.485 } 00:10:40.485 } 00:10:40.485 ] 00:10:40.742 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3082253 00:10:40.742 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:40.743 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:40.743 Running I/O for 10 seconds... 00:10:41.698 Latency(us) 00:10:41.698 [2024-11-25T12:09:39.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:41.698 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:10:41.698 [2024-11-25T12:09:39.357Z] =================================================================================================================== 00:10:41.698 [2024-11-25T12:09:39.357Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:10:41.698 00:10:42.632 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:42.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:42.632 Nvme0n1 : 2.00 15163.50 59.23 0.00 0.00 0.00 0.00 0.00 00:10:42.632 [2024-11-25T12:09:40.291Z] =================================================================================================================== 00:10:42.632 [2024-11-25T12:09:40.291Z] Total : 15163.50 59.23 0.00 0.00 0.00 0.00 0.00 00:10:42.632 00:10:42.889 true 00:10:42.889 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:42.889 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:43.148 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:43.148 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:43.148 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3082253 00:10:43.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:43.713 Nvme0n1 : 3.00 15273.67 59.66 0.00 0.00 0.00 0.00 0.00 00:10:43.713 [2024-11-25T12:09:41.372Z] =================================================================================================================== 00:10:43.713 [2024-11-25T12:09:41.372Z] Total : 15273.67 59.66 0.00 0.00 0.00 0.00 0.00 00:10:43.713 00:10:44.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:44.644 Nvme0n1 : 4.00 15360.50 60.00 0.00 0.00 0.00 0.00 0.00 00:10:44.644 [2024-11-25T12:09:42.303Z] =================================================================================================================== 00:10:44.644 [2024-11-25T12:09:42.303Z] Total : 15360.50 60.00 0.00 0.00 0.00 0.00 0.00 00:10:44.644 00:10:46.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.017 Nvme0n1 : 5.00 15438.00 60.30 0.00 0.00 0.00 0.00 0.00 00:10:46.017 [2024-11-25T12:09:43.676Z] =================================================================================================================== 00:10:46.017 [2024-11-25T12:09:43.676Z] Total : 15438.00 60.30 0.00 0.00 0.00 0.00 0.00 00:10:46.017 00:10:46.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.953 Nvme0n1 : 6.00 15510.83 60.59 0.00 0.00 0.00 0.00 0.00 00:10:46.953 [2024-11-25T12:09:44.612Z] =================================================================================================================== 00:10:46.953 
[2024-11-25T12:09:44.612Z] Total : 15510.83 60.59 0.00 0.00 0.00 0.00 0.00 00:10:46.953 00:10:47.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.887 Nvme0n1 : 7.00 15554.29 60.76 0.00 0.00 0.00 0.00 0.00 00:10:47.887 [2024-11-25T12:09:45.546Z] =================================================================================================================== 00:10:47.887 [2024-11-25T12:09:45.546Z] Total : 15554.29 60.76 0.00 0.00 0.00 0.00 0.00 00:10:47.887 00:10:48.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.821 Nvme0n1 : 8.00 15590.62 60.90 0.00 0.00 0.00 0.00 0.00 00:10:48.821 [2024-11-25T12:09:46.480Z] =================================================================================================================== 00:10:48.821 [2024-11-25T12:09:46.480Z] Total : 15590.62 60.90 0.00 0.00 0.00 0.00 0.00 00:10:48.821 00:10:49.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.753 Nvme0n1 : 9.00 15622.56 61.03 0.00 0.00 0.00 0.00 0.00 00:10:49.753 [2024-11-25T12:09:47.412Z] =================================================================================================================== 00:10:49.753 [2024-11-25T12:09:47.412Z] Total : 15622.56 61.03 0.00 0.00 0.00 0.00 0.00 00:10:49.753 00:10:50.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.687 Nvme0n1 : 10.00 15647.80 61.12 0.00 0.00 0.00 0.00 0.00 00:10:50.687 [2024-11-25T12:09:48.346Z] =================================================================================================================== 00:10:50.687 [2024-11-25T12:09:48.346Z] Total : 15647.80 61.12 0.00 0.00 0.00 0.00 0.00 00:10:50.687 00:10:50.687 00:10:50.687 Latency(us) 00:10:50.687 [2024-11-25T12:09:48.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:50.687 Nvme0n1 : 10.01 15647.43 61.12 0.00 0.00 8175.86 2487.94 16019.91 00:10:50.687 [2024-11-25T12:09:48.346Z] =================================================================================================================== 00:10:50.687 [2024-11-25T12:09:48.346Z] Total : 15647.43 61.12 0.00 0.00 8175.86 2487.94 16019.91 00:10:50.687 { 00:10:50.687 "results": [ 00:10:50.687 { 00:10:50.687 "job": "Nvme0n1", 00:10:50.687 "core_mask": "0x2", 00:10:50.687 "workload": "randwrite", 00:10:50.687 "status": "finished", 00:10:50.687 "queue_depth": 128, 00:10:50.687 "io_size": 4096, 00:10:50.687 "runtime": 10.008416, 00:10:50.687 "iops": 15647.431121967751, 00:10:50.687 "mibps": 61.12277782018653, 00:10:50.687 "io_failed": 0, 00:10:50.687 "io_timeout": 0, 00:10:50.687 "avg_latency_us": 8175.857755679386, 00:10:50.687 "min_latency_us": 2487.9407407407407, 00:10:50.687 "max_latency_us": 16019.91111111111 00:10:50.687 } 00:10:50.687 ], 00:10:50.687 "core_count": 1 00:10:50.687 } 00:10:50.687 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3082154 00:10:50.687 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3082154 ']' 00:10:50.687 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3082154 00:10:50.687 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:50.687 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.687 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3082154 00:10:50.945 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:50.945 13:09:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:50.945 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3082154' 00:10:50.945 killing process with pid 3082154 00:10:50.945 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3082154 00:10:50.945 Received shutdown signal, test time was about 10.000000 seconds 00:10:50.945 00:10:50.945 Latency(us) 00:10:50.945 [2024-11-25T12:09:48.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.945 [2024-11-25T12:09:48.604Z] =================================================================================================================== 00:10:50.945 [2024-11-25T12:09:48.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:50.945 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3082154 00:10:50.945 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:51.203 13:09:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:51.461 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:51.461 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:51.717 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:51.717 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:51.717 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3079579 00:10:51.717 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3079579 00:10:51.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3079579 Killed "${NVMF_APP[@]}" "$@" 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3083587 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3083587 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3083587 ']' 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.975 13:09:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.975 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:51.975 [2024-11-25 13:09:49.443442] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:10:51.976 [2024-11-25 13:09:49.443533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.976 [2024-11-25 13:09:49.518059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.976 [2024-11-25 13:09:49.576472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.976 [2024-11-25 13:09:49.576525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.976 [2024-11-25 13:09:49.576555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.976 [2024-11-25 13:09:49.576566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.976 [2024-11-25 13:09:49.576576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.976 [2024-11-25 13:09:49.577172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.234 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.234 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:52.234 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:52.234 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:52.234 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:52.234 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.234 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:52.491 [2024-11-25 13:09:49.978074] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:52.491 [2024-11-25 13:09:49.978216] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:52.491 [2024-11-25 13:09:49.978265] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0bb3a749-0aa1-482d-aea1-b0d24b2630e2 00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0bb3a749-0aa1-482d-aea1-b0d24b2630e2 
00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.491 13:09:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:52.749 13:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0bb3a749-0aa1-482d-aea1-b0d24b2630e2 -t 2000 00:10:53.008 [ 00:10:53.008 { 00:10:53.008 "name": "0bb3a749-0aa1-482d-aea1-b0d24b2630e2", 00:10:53.008 "aliases": [ 00:10:53.008 "lvs/lvol" 00:10:53.008 ], 00:10:53.008 "product_name": "Logical Volume", 00:10:53.008 "block_size": 4096, 00:10:53.008 "num_blocks": 38912, 00:10:53.008 "uuid": "0bb3a749-0aa1-482d-aea1-b0d24b2630e2", 00:10:53.008 "assigned_rate_limits": { 00:10:53.008 "rw_ios_per_sec": 0, 00:10:53.008 "rw_mbytes_per_sec": 0, 00:10:53.008 "r_mbytes_per_sec": 0, 00:10:53.008 "w_mbytes_per_sec": 0 00:10:53.008 }, 00:10:53.008 "claimed": false, 00:10:53.008 "zoned": false, 00:10:53.008 "supported_io_types": { 00:10:53.008 "read": true, 00:10:53.008 "write": true, 00:10:53.008 "unmap": true, 00:10:53.008 "flush": false, 00:10:53.008 "reset": true, 00:10:53.008 "nvme_admin": false, 00:10:53.008 "nvme_io": false, 00:10:53.008 "nvme_io_md": false, 00:10:53.008 "write_zeroes": true, 00:10:53.008 "zcopy": false, 00:10:53.008 "get_zone_info": false, 00:10:53.008 "zone_management": false, 00:10:53.008 "zone_append": 
false, 00:10:53.008 "compare": false, 00:10:53.008 "compare_and_write": false, 00:10:53.008 "abort": false, 00:10:53.008 "seek_hole": true, 00:10:53.008 "seek_data": true, 00:10:53.008 "copy": false, 00:10:53.008 "nvme_iov_md": false 00:10:53.008 }, 00:10:53.008 "driver_specific": { 00:10:53.008 "lvol": { 00:10:53.008 "lvol_store_uuid": "8099ac6a-9fbb-4686-8c50-e83d9aa59ab1", 00:10:53.008 "base_bdev": "aio_bdev", 00:10:53.008 "thin_provision": false, 00:10:53.008 "num_allocated_clusters": 38, 00:10:53.008 "snapshot": false, 00:10:53.008 "clone": false, 00:10:53.008 "esnap_clone": false 00:10:53.008 } 00:10:53.008 } 00:10:53.008 } 00:10:53.008 ] 00:10:53.008 13:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:53.008 13:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:53.008 13:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:53.266 13:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:53.266 13:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:53.266 13:09:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:53.525 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:53.525 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:53.784 [2024-11-25 13:09:51.343713] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.784 13:09:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:53.784 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:54.043 request: 00:10:54.043 { 00:10:54.043 "uuid": "8099ac6a-9fbb-4686-8c50-e83d9aa59ab1", 00:10:54.043 "method": "bdev_lvol_get_lvstores", 00:10:54.043 "req_id": 1 00:10:54.043 } 00:10:54.043 Got JSON-RPC error response 00:10:54.043 response: 00:10:54.043 { 00:10:54.043 "code": -19, 00:10:54.043 "message": "No such device" 00:10:54.043 } 00:10:54.043 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:54.043 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:54.043 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:54.043 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:54.043 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:54.301 aio_bdev 00:10:54.301 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0bb3a749-0aa1-482d-aea1-b0d24b2630e2 00:10:54.301 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0bb3a749-0aa1-482d-aea1-b0d24b2630e2 00:10:54.301 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:54.301 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:54.301 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:54.301 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:54.301 13:09:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:54.559 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0bb3a749-0aa1-482d-aea1-b0d24b2630e2 -t 2000 00:10:54.818 [ 00:10:54.818 { 00:10:54.818 "name": "0bb3a749-0aa1-482d-aea1-b0d24b2630e2", 00:10:54.818 "aliases": [ 00:10:54.818 "lvs/lvol" 00:10:54.818 ], 00:10:54.818 "product_name": "Logical Volume", 00:10:54.818 "block_size": 4096, 00:10:54.818 "num_blocks": 38912, 00:10:54.818 "uuid": "0bb3a749-0aa1-482d-aea1-b0d24b2630e2", 00:10:54.818 "assigned_rate_limits": { 00:10:54.818 "rw_ios_per_sec": 0, 00:10:54.818 "rw_mbytes_per_sec": 0, 00:10:54.818 "r_mbytes_per_sec": 0, 00:10:54.818 "w_mbytes_per_sec": 0 00:10:54.818 }, 00:10:54.818 "claimed": false, 00:10:54.818 "zoned": false, 00:10:54.818 "supported_io_types": { 00:10:54.818 "read": true, 00:10:54.818 "write": true, 00:10:54.818 "unmap": true, 00:10:54.818 "flush": false, 00:10:54.818 "reset": true, 00:10:54.818 "nvme_admin": false, 00:10:54.818 "nvme_io": false, 00:10:54.818 "nvme_io_md": false, 00:10:54.818 "write_zeroes": true, 00:10:54.818 "zcopy": false, 00:10:54.818 "get_zone_info": false, 00:10:54.818 "zone_management": false, 00:10:54.818 "zone_append": false, 00:10:54.818 "compare": false, 00:10:54.818 "compare_and_write": false, 
00:10:54.818 "abort": false, 00:10:54.818 "seek_hole": true, 00:10:54.818 "seek_data": true, 00:10:54.818 "copy": false, 00:10:54.818 "nvme_iov_md": false 00:10:54.818 }, 00:10:54.818 "driver_specific": { 00:10:54.818 "lvol": { 00:10:54.818 "lvol_store_uuid": "8099ac6a-9fbb-4686-8c50-e83d9aa59ab1", 00:10:54.818 "base_bdev": "aio_bdev", 00:10:54.818 "thin_provision": false, 00:10:54.818 "num_allocated_clusters": 38, 00:10:54.818 "snapshot": false, 00:10:54.818 "clone": false, 00:10:54.818 "esnap_clone": false 00:10:54.818 } 00:10:54.818 } 00:10:54.818 } 00:10:54.818 ] 00:10:54.818 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:54.818 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:54.818 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:55.076 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:55.076 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:55.076 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:55.334 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:55.334 13:09:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0bb3a749-0aa1-482d-aea1-b0d24b2630e2 00:10:55.592 13:09:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8099ac6a-9fbb-4686-8c50-e83d9aa59ab1 00:10:56.158 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:56.158 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:56.158 00:10:56.158 real 0m19.269s 00:10:56.158 user 0m49.086s 00:10:56.158 sys 0m4.599s 00:10:56.158 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.158 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:56.158 ************************************ 00:10:56.158 END TEST lvs_grow_dirty 00:10:56.158 ************************************ 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:56.416 nvmf_trace.0 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.416 rmmod nvme_tcp 00:10:56.416 rmmod nvme_fabrics 00:10:56.416 rmmod nvme_keyring 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3083587 ']' 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3083587 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3083587 ']' 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3083587 
00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3083587 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3083587' 00:10:56.416 killing process with pid 3083587 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3083587 00:10:56.416 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3083587 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.676 13:09:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.587 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:58.587 00:10:58.587 real 0m42.606s 00:10:58.587 user 1m12.441s 00:10:58.587 sys 0m8.498s 00:10:58.587 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.587 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:58.587 ************************************ 00:10:58.587 END TEST nvmf_lvs_grow 00:10:58.587 ************************************ 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.873 ************************************ 00:10:58.873 START TEST nvmf_bdev_io_wait 00:10:58.873 ************************************ 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:58.873 * Looking for test storage... 
00:10:58.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:58.873 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.873 --rc genhtml_branch_coverage=1 00:10:58.873 --rc genhtml_function_coverage=1 00:10:58.873 --rc genhtml_legend=1 00:10:58.873 --rc geninfo_all_blocks=1 00:10:58.873 --rc geninfo_unexecuted_blocks=1 00:10:58.873 00:10:58.873 ' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.873 --rc genhtml_branch_coverage=1 00:10:58.873 --rc genhtml_function_coverage=1 00:10:58.873 --rc genhtml_legend=1 00:10:58.873 --rc geninfo_all_blocks=1 00:10:58.873 --rc geninfo_unexecuted_blocks=1 00:10:58.873 00:10:58.873 ' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.873 --rc genhtml_branch_coverage=1 00:10:58.873 --rc genhtml_function_coverage=1 00:10:58.873 --rc genhtml_legend=1 00:10:58.873 --rc geninfo_all_blocks=1 00:10:58.873 --rc geninfo_unexecuted_blocks=1 00:10:58.873 00:10:58.873 ' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:58.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.873 --rc genhtml_branch_coverage=1 00:10:58.873 --rc genhtml_function_coverage=1 00:10:58.873 --rc genhtml_legend=1 00:10:58.873 --rc geninfo_all_blocks=1 00:10:58.873 --rc geninfo_unexecuted_blocks=1 00:10:58.873 00:10:58.873 ' 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.873 13:09:56 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.873 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.874 13:09:56 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.409 13:09:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:01.409 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:01.409 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.409 13:09:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.409 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:01.410 Found net devices under 0000:09:00.0: cvl_0_0 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.410 
13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:01.410 Found net devices under 0000:09:00.1: cvl_0_1 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.410 13:09:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:11:01.410 00:11:01.410 --- 10.0.0.2 ping statistics --- 00:11:01.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.410 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:01.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:11:01.410 00:11:01.410 --- 10.0.0.1 ping statistics --- 00:11:01.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.410 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3086168 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3086168 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3086168 ']' 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.410 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.410 [2024-11-25 13:09:58.821564] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:01.410 [2024-11-25 13:09:58.821670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.410 [2024-11-25 13:09:58.895392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.410 [2024-11-25 13:09:58.957544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.410 [2024-11-25 13:09:58.957615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:01.410 [2024-11-25 13:09:58.957629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.410 [2024-11-25 13:09:58.957655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.410 [2024-11-25 13:09:58.957664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:01.411 [2024-11-25 13:09:58.959319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.411 [2024-11-25 13:09:58.959377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.411 [2024-11-25 13:09:58.959444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.411 [2024-11-25 13:09:58.959447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.411 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.411 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:01.411 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.411 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.411 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 13:09:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 [2024-11-25 13:09:59.153548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 Malloc0 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.670 
13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:01.670 [2024-11-25 13:09:59.205155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3086278 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:01.670 
13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:01.670 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:01.670 { 00:11:01.670 "params": { 00:11:01.670 "name": "Nvme$subsystem", 00:11:01.671 "trtype": "$TEST_TRANSPORT", 00:11:01.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "$NVMF_PORT", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.671 "hdgst": ${hdgst:-false}, 00:11:01.671 "ddgst": ${ddgst:-false} 00:11:01.671 }, 00:11:01.671 "method": "bdev_nvme_attach_controller" 00:11:01.671 } 00:11:01.671 EOF 00:11:01.671 )") 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3086280 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:01.671 { 00:11:01.671 "params": { 00:11:01.671 "name": 
"Nvme$subsystem", 00:11:01.671 "trtype": "$TEST_TRANSPORT", 00:11:01.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "$NVMF_PORT", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.671 "hdgst": ${hdgst:-false}, 00:11:01.671 "ddgst": ${ddgst:-false} 00:11:01.671 }, 00:11:01.671 "method": "bdev_nvme_attach_controller" 00:11:01.671 } 00:11:01.671 EOF 00:11:01.671 )") 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3086283 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3086287 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:01.671 { 00:11:01.671 "params": { 00:11:01.671 "name": "Nvme$subsystem", 00:11:01.671 "trtype": "$TEST_TRANSPORT", 00:11:01.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "$NVMF_PORT", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.671 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:11:01.671 "hdgst": ${hdgst:-false}, 00:11:01.671 "ddgst": ${ddgst:-false} 00:11:01.671 }, 00:11:01.671 "method": "bdev_nvme_attach_controller" 00:11:01.671 } 00:11:01.671 EOF 00:11:01.671 )") 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:01.671 { 00:11:01.671 "params": { 00:11:01.671 "name": "Nvme$subsystem", 00:11:01.671 "trtype": "$TEST_TRANSPORT", 00:11:01.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "$NVMF_PORT", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.671 "hdgst": ${hdgst:-false}, 00:11:01.671 "ddgst": ${ddgst:-false} 00:11:01.671 }, 00:11:01.671 "method": "bdev_nvme_attach_controller" 00:11:01.671 } 00:11:01.671 EOF 00:11:01.671 )") 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3086278 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:01.671 "params": { 00:11:01.671 "name": "Nvme1", 00:11:01.671 "trtype": "tcp", 00:11:01.671 "traddr": "10.0.0.2", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "4420", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.671 "hdgst": false, 00:11:01.671 "ddgst": false 00:11:01.671 }, 00:11:01.671 "method": "bdev_nvme_attach_controller" 00:11:01.671 }' 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
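The `gen_nvmf_target_json` trace above builds one JSON fragment per subsystem with a heredoc, accumulates the fragments in a `config` array, joins them on `IFS=,`, and pipes the result through `jq .` for validation. A minimal standalone sketch of that pattern follows; it is not the SPDK source, and every variable value below (transport, address, port) is an illustrative stand-in.

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-per-subsystem JSON assembly pattern traced above.
# All values are stand-ins, not the ones used by this test run.
subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")

# Join all fragments on ',' and let jq validate the resulting JSON.
IFS=,
if command -v jq >/dev/null 2>&1; then
    printf '%s\n' "${config[*]}" | jq .
else
    printf '%s\n' "${config[*]}"
fi
```

Feeding the joined output to bdevperf via `--json /dev/fd/63`, as the trace does, works because the whole pipeline is supplied through process substitution.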
00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:01.671 "params": { 00:11:01.671 "name": "Nvme1", 00:11:01.671 "trtype": "tcp", 00:11:01.671 "traddr": "10.0.0.2", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "4420", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.671 "hdgst": false, 00:11:01.671 "ddgst": false 00:11:01.671 }, 00:11:01.671 "method": "bdev_nvme_attach_controller" 00:11:01.671 }' 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:01.671 "params": { 00:11:01.671 "name": "Nvme1", 00:11:01.671 "trtype": "tcp", 00:11:01.671 "traddr": "10.0.0.2", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "4420", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.671 "hdgst": false, 00:11:01.671 "ddgst": false 00:11:01.671 }, 00:11:01.671 "method": "bdev_nvme_attach_controller" 00:11:01.671 }' 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:01.671 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:01.671 "params": { 00:11:01.671 "name": "Nvme1", 00:11:01.671 "trtype": "tcp", 00:11:01.671 "traddr": "10.0.0.2", 00:11:01.671 "adrfam": "ipv4", 00:11:01.671 "trsvcid": "4420", 00:11:01.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.671 "hdgst": false, 00:11:01.672 "ddgst": false 00:11:01.672 }, 00:11:01.672 "method": 
"bdev_nvme_attach_controller" 00:11:01.672 }' 00:11:01.672 [2024-11-25 13:09:59.255092] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:01.672 [2024-11-25 13:09:59.255092] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:01.672 [2024-11-25 13:09:59.255185] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:01.672 [2024-11-25 13:09:59.255186] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:01.672 [2024-11-25 13:09:59.257865] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:01.672 [2024-11-25 13:09:59.257865] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization...
00:11:01.672 [2024-11-25 13:09:59.257940] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:01.672 [2024-11-25 13:09:59.257940] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:01.929 [2024-11-25 13:09:59.442574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.929 [2024-11-25 13:09:59.498736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:01.929 [2024-11-25 13:09:59.551268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.188 [2024-11-25 13:09:59.609056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:02.188 [2024-11-25 13:09:59.657381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.188 [2024-11-25 13:09:59.711848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.188 [2024-11-25 13:09:59.714983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:02.188 [2024-11-25 13:09:59.764676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:02.445 Running I/O for 1 seconds... 00:11:02.445 Running I/O for 1 seconds... 00:11:02.445 Running I/O for 1 seconds... 00:11:02.445 Running I/O for 1 seconds...
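The FLUSH_PID/UNMAP_PID assignments earlier in this trace, and the later `wait 3086280` / `wait 3086283` / `wait 3086287` calls, follow the standard shell pattern of launching each bdevperf workload in the background, recording `$!`, and reaping each PID. A generic sketch of that pattern, where `run_workload` is a hypothetical stand-in for a bdevperf invocation:

```shell
#!/usr/bin/env bash
# Sketch of the background-job/wait pattern used by bdev_io_wait.sh.
# run_workload is a hypothetical stand-in for a bdevperf run.
run_workload() {
    sleep 0.1
    echo "$1 workload done"
}

# Launch one job per workload type; $! captures each background PID.
run_workload write & WRITE_PID=$!
run_workload read  & READ_PID=$!
run_workload flush & FLUSH_PID=$!
run_workload unmap & UNMAP_PID=$!

# wait blocks until every listed PID exits; its status is that of the
# last PID in the list.
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
echo "all workloads finished"
```

Waiting on explicit PIDs rather than a bare `wait` lets the test script surface the exit status of a specific job, which is why the trace waits on each recorded PID in turn.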
00:11:03.377 184448.00 IOPS, 720.50 MiB/s [2024-11-25T12:10:01.036Z] 10233.00 IOPS, 39.97 MiB/s 00:11:03.377 Latency(us) 00:11:03.377 [2024-11-25T12:10:01.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.377 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:03.377 Nvme1n1 : 1.00 184099.58 719.14 0.00 0.00 691.43 291.27 1893.26 00:11:03.377 [2024-11-25T12:10:01.036Z] =================================================================================================================== 00:11:03.377 [2024-11-25T12:10:01.036Z] Total : 184099.58 719.14 0.00 0.00 691.43 291.27 1893.26 00:11:03.377 00:11:03.378 Latency(us) 00:11:03.378 [2024-11-25T12:10:01.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.378 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:03.378 Nvme1n1 : 1.01 10272.61 40.13 0.00 0.00 12404.21 7136.14 19029.71 00:11:03.378 [2024-11-25T12:10:01.037Z] =================================================================================================================== 00:11:03.378 [2024-11-25T12:10:01.037Z] Total : 10272.61 40.13 0.00 0.00 12404.21 7136.14 19029.71 00:11:03.378 9436.00 IOPS, 36.86 MiB/s 00:11:03.378 Latency(us) 00:11:03.378 [2024-11-25T12:10:01.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.378 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:03.378 Nvme1n1 : 1.01 9502.32 37.12 0.00 0.00 13415.83 5412.79 23301.69 00:11:03.378 [2024-11-25T12:10:01.037Z] =================================================================================================================== 00:11:03.378 [2024-11-25T12:10:01.037Z] Total : 9502.32 37.12 0.00 0.00 13415.83 5412.79 23301.69 00:11:03.378 8308.00 IOPS, 32.45 MiB/s 00:11:03.378 Latency(us) 00:11:03.378 [2024-11-25T12:10:01.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.378 
Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:03.378 Nvme1n1 : 1.01 8377.27 32.72 0.00 0.00 15210.13 3737.98 23884.23 00:11:03.378 [2024-11-25T12:10:01.037Z] =================================================================================================================== 00:11:03.378 [2024-11-25T12:10:01.037Z] Total : 8377.27 32.72 0.00 0.00 15210.13 3737.98 23884.23 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3086280 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3086283 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3086287 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for 
i in {1..20} 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.636 rmmod nvme_tcp 00:11:03.636 rmmod nvme_fabrics 00:11:03.636 rmmod nvme_keyring 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3086168 ']' 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3086168 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3086168 ']' 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3086168 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.636 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3086168 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3086168' 00:11:03.894 killing process with pid 3086168 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3086168 00:11:03.894 13:10:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3086168 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.894 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.426 00:11:06.426 real 0m7.292s 00:11:06.426 user 0m15.578s 00:11:06.426 sys 0m3.831s 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:06.426 ************************************ 
00:11:06.426 END TEST nvmf_bdev_io_wait 00:11:06.426 ************************************ 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.426 ************************************ 00:11:06.426 START TEST nvmf_queue_depth 00:11:06.426 ************************************ 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:06.426 * Looking for test storage... 00:11:06.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.426 --rc genhtml_branch_coverage=1 00:11:06.426 --rc genhtml_function_coverage=1 00:11:06.426 --rc genhtml_legend=1 00:11:06.426 --rc geninfo_all_blocks=1 00:11:06.426 --rc 
geninfo_unexecuted_blocks=1 00:11:06.426 00:11:06.426 ' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.426 --rc genhtml_branch_coverage=1 00:11:06.426 --rc genhtml_function_coverage=1 00:11:06.426 --rc genhtml_legend=1 00:11:06.426 --rc geninfo_all_blocks=1 00:11:06.426 --rc geninfo_unexecuted_blocks=1 00:11:06.426 00:11:06.426 ' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.426 --rc genhtml_branch_coverage=1 00:11:06.426 --rc genhtml_function_coverage=1 00:11:06.426 --rc genhtml_legend=1 00:11:06.426 --rc geninfo_all_blocks=1 00:11:06.426 --rc geninfo_unexecuted_blocks=1 00:11:06.426 00:11:06.426 ' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.426 --rc genhtml_branch_coverage=1 00:11:06.426 --rc genhtml_function_coverage=1 00:11:06.426 --rc genhtml_legend=1 00:11:06.426 --rc geninfo_all_blocks=1 00:11:06.426 --rc geninfo_unexecuted_blocks=1 00:11:06.426 00:11:06.426 ' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.426 13:10:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.426 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.427 13:10:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.427 13:10:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.427 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.335 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.335 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:08.335 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:08.336 13:10:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:08.336 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:08.336 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:08.336 Found net devices under 0000:09:00.0: cvl_0_0 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:08.336 Found net devices under 0000:09:00.1: cvl_0_1 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.336 
13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.336 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.337 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.337 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:08.337 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:08.337 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.337 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:08.595 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:08.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:11:08.596 00:11:08.596 --- 10.0.0.2 ping statistics --- 00:11:08.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.596 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:11:08.596 00:11:08.596 --- 10.0.0.1 ping statistics --- 00:11:08.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.596 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3088510 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3088510 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3088510 ']' 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.596 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.596 [2024-11-25 13:10:06.173041] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:08.596 [2024-11-25 13:10:06.173132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.596 [2024-11-25 13:10:06.251426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.854 [2024-11-25 13:10:06.310355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.854 [2024-11-25 13:10:06.310412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:08.854 [2024-11-25 13:10:06.310442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.854 [2024-11-25 13:10:06.310453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.854 [2024-11-25 13:10:06.310463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.854 [2024-11-25 13:10:06.311110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.855 [2024-11-25 13:10:06.467281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.855 Malloc0 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:08.855 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 [2024-11-25 13:10:06.516926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.113 13:10:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3088540 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3088540 /var/tmp/bdevperf.sock 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3088540 ']' 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.113 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.113 [2024-11-25 13:10:06.569596] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:11:09.113 [2024-11-25 13:10:06.569689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088540 ] 00:11:09.113 [2024-11-25 13:10:06.638062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.113 [2024-11-25 13:10:06.696376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.371 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.372 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:09.372 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:09.372 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.372 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:09.372 NVMe0n1 00:11:09.372 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.372 13:10:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:09.372 Running I/O for 10 seconds... 
00:11:11.676 8192.00 IOPS, 32.00 MiB/s [2024-11-25T12:10:10.269Z] 8377.00 IOPS, 32.72 MiB/s [2024-11-25T12:10:11.201Z] 8449.67 IOPS, 33.01 MiB/s [2024-11-25T12:10:12.136Z] 8442.50 IOPS, 32.98 MiB/s [2024-11-25T12:10:13.068Z] 8418.20 IOPS, 32.88 MiB/s [2024-11-25T12:10:14.444Z] 8496.67 IOPS, 33.19 MiB/s [2024-11-25T12:10:15.377Z] 8481.14 IOPS, 33.13 MiB/s [2024-11-25T12:10:16.312Z] 8504.25 IOPS, 33.22 MiB/s [2024-11-25T12:10:17.245Z] 8527.00 IOPS, 33.31 MiB/s [2024-11-25T12:10:17.245Z] 8535.80 IOPS, 33.34 MiB/s 00:11:19.586 Latency(us) 00:11:19.586 [2024-11-25T12:10:17.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.586 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:19.586 Verification LBA range: start 0x0 length 0x4000 00:11:19.586 NVMe0n1 : 10.08 8567.76 33.47 0.00 0.00 118949.86 19418.07 73011.96 00:11:19.586 [2024-11-25T12:10:17.245Z] =================================================================================================================== 00:11:19.586 [2024-11-25T12:10:17.245Z] Total : 8567.76 33.47 0.00 0.00 118949.86 19418.07 73011.96 00:11:19.586 { 00:11:19.586 "results": [ 00:11:19.586 { 00:11:19.586 "job": "NVMe0n1", 00:11:19.586 "core_mask": "0x1", 00:11:19.586 "workload": "verify", 00:11:19.586 "status": "finished", 00:11:19.586 "verify_range": { 00:11:19.586 "start": 0, 00:11:19.586 "length": 16384 00:11:19.586 }, 00:11:19.586 "queue_depth": 1024, 00:11:19.586 "io_size": 4096, 00:11:19.586 "runtime": 10.07545, 00:11:19.586 "iops": 8567.756278875882, 00:11:19.586 "mibps": 33.467797964358915, 00:11:19.587 "io_failed": 0, 00:11:19.587 "io_timeout": 0, 00:11:19.587 "avg_latency_us": 118949.8571861737, 00:11:19.587 "min_latency_us": 19418.074074074073, 00:11:19.587 "max_latency_us": 73011.95851851851 00:11:19.587 } 00:11:19.587 ], 00:11:19.587 "core_count": 1 00:11:19.587 } 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3088540 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3088540 ']' 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3088540 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3088540 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3088540' 00:11:19.587 killing process with pid 3088540 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3088540 00:11:19.587 Received shutdown signal, test time was about 10.000000 seconds 00:11:19.587 00:11:19.587 Latency(us) 00:11:19.587 [2024-11-25T12:10:17.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.587 [2024-11-25T12:10:17.246Z] =================================================================================================================== 00:11:19.587 [2024-11-25T12:10:17.246Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:19.587 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3088540 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.845 rmmod nvme_tcp 00:11:19.845 rmmod nvme_fabrics 00:11:19.845 rmmod nvme_keyring 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3088510 ']' 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3088510 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3088510 ']' 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3088510 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3088510 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3088510' 00:11:19.845 killing process with pid 3088510 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3088510 00:11:19.845 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3088510 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.105 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.640 13:10:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.640 00:11:22.640 real 0m16.141s 00:11:22.640 user 0m22.378s 00:11:22.640 sys 0m3.273s 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:22.640 ************************************ 00:11:22.640 END TEST nvmf_queue_depth 00:11:22.640 ************************************ 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.640 ************************************ 00:11:22.640 START TEST nvmf_target_multipath 00:11:22.640 ************************************ 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:22.640 * Looking for test storage... 
00:11:22.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:22.640 13:10:19 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.640 --rc genhtml_branch_coverage=1 00:11:22.640 --rc genhtml_function_coverage=1 00:11:22.640 --rc genhtml_legend=1 00:11:22.640 --rc geninfo_all_blocks=1 00:11:22.640 --rc geninfo_unexecuted_blocks=1 00:11:22.640 00:11:22.640 ' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.640 --rc genhtml_branch_coverage=1 00:11:22.640 --rc genhtml_function_coverage=1 00:11:22.640 --rc genhtml_legend=1 00:11:22.640 --rc geninfo_all_blocks=1 00:11:22.640 --rc geninfo_unexecuted_blocks=1 00:11:22.640 00:11:22.640 ' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.640 --rc genhtml_branch_coverage=1 00:11:22.640 --rc genhtml_function_coverage=1 00:11:22.640 --rc genhtml_legend=1 00:11:22.640 --rc geninfo_all_blocks=1 00:11:22.640 --rc geninfo_unexecuted_blocks=1 00:11:22.640 00:11:22.640 ' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.640 --rc genhtml_branch_coverage=1 00:11:22.640 --rc genhtml_function_coverage=1 00:11:22.640 --rc genhtml_legend=1 00:11:22.640 --rc geninfo_all_blocks=1 00:11:22.640 --rc geninfo_unexecuted_blocks=1 00:11:22.640 00:11:22.640 ' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.640 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
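The `[: : integer expression expected` message above is a real (if harmless) artifact in the run: an unset or empty variable reached the numeric test `[ '' -eq 1 ]` at nvmf/common.sh line 33. The usual defensive pattern is to default the expansion before the numeric comparison; a small sketch (the flag name is hypothetical):

```shell
# An empty variable breaks a numeric test:
FLAG=""
# [ "$FLAG" -eq 1 ]           # -> "[: : integer expression expected"

# Defaulting the expansion keeps the test well-formed:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"       # prints: flag disabled
fi
```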
MALLOC_BDEV_SIZE=64 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.641 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:24.545 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:24.545 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
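The `Found 0000:09:00.x (0x8086 - 0x159b)` lines come from matching each PCI device against the vendor:device tables built above (0x8086:0x159b is an Intel E810-class NIC handled by the `ice` driver). The same enumeration can be done directly from sysfs; a read-only sketch using the IDs from the trace (the function name is illustrative):

```shell
# Enumerate PCI NICs matching the Intel E810 IDs seen in the trace.
# Reads sysfs only, so it is safe to run unprivileged.
find_e810() {
    local intel=0x8086 dev vendor device
    for dev in /sys/bus/pci/devices/*; do
        [ -r "$dev/vendor" ] && [ -r "$dev/device" ] || continue
        vendor=$(cat "$dev/vendor")
        device=$(cat "$dev/device")
        case "$device" in
            0x1592|0x159b)
                [ "$vendor" = "$intel" ] &&
                    echo "Found ${dev##*/} ($vendor - $device)"
                ;;
        esac
    done
    return 0
}

find_e810
```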
00:11:24.545 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:24.546 Found net devices under 0000:09:00.0: cvl_0_0 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:24.546 13:10:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:24.546 Found net devices under 0000:09:00.1: cvl_0_1 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:24.546 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
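`nvmf_tcp_init` above dedicates one port (cvl_0_0) to the target inside a network namespace and leaves the other (cvl_0_1) as the initiator, with 10.0.0.2 and 10.0.0.1 on either side. The same two-endpoint topology can be rebuilt on any Linux host with a veth pair instead of a physical NIC; a sketch with hypothetical names (the `ip` commands need root, so the setup only runs when invoked as root):

```shell
# Rebuild the target/initiator split with a veth pair instead of a
# physical NIC. All interface/namespace names here are illustrative.
setup_test_ns() {
    ip netns add tgt_ns
    ip link add veth_init type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns

    ip addr add 10.0.0.1/24 dev veth_init              # initiator side
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt

    ip link set veth_init up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up

    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec tgt_ns ping -c 1 10.0.0.1            # target -> initiator
}

teardown_test_ns() {
    ip netns del tgt_ns    # deleting the ns also removes the veth pair
}

# Only attempt the privileged setup when running as root.
[ "$(id -u)" -eq 0 ] && { setup_test_ns; teardown_test_ns; } || true
```

Deleting the namespace is enough for cleanup because a veth end is destroyed together with the namespace holding it.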
00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:24.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:11:24.805 00:11:24.805 --- 10.0.0.2 ping statistics --- 00:11:24.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.805 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:24.805 00:11:24.805 --- 10.0.0.1 ping statistics --- 00:11:24.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.805 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:24.805 only one NIC for nvmf test 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:24.805 13:10:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:24.805 rmmod nvme_tcp 00:11:24.805 rmmod nvme_fabrics 00:11:24.805 rmmod nvme_keyring 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
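The `ipts`/`iptr` helpers in the trace implement a tag-and-sweep pattern: every firewall rule common.sh adds carries an `SPDK_NVMF` comment, so teardown can drop them all at once by filtering the saved ruleset. A minimal restatement of that pattern (function names here are illustrative, and the iptables calls themselves require root):

```shell
# Tag-and-sweep firewall rules, mirroring the ipts/iptr helpers seen in
# the trace. The iptables calls require root to actually execute.
TAG="SPDK_NVMF"

tag_rule() {
    # Append the tag as an iptables comment so sweep_rules can find it.
    iptables "$@" -m comment --comment "$TAG:$*"
}

sweep_rules() {
    # Remove every tagged rule in one pass, whatever chain it landed in.
    iptables-save | grep -v "$TAG" | iptables-restore
}

# Example (root only):
#   tag_rule -I INPUT 1 -i eth0 -p tcp --dport 4420 -j ACCEPT
#   sweep_rules
```

Sweeping via `iptables-save | grep -v | iptables-restore` avoids having to remember each rule's exact chain and position at teardown time.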
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.805 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.400 00:11:27.400 real 0m4.655s 00:11:27.400 user 0m0.971s 00:11:27.400 sys 0m1.696s 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:27.400 ************************************ 00:11:27.400 END TEST nvmf_target_multipath 00:11:27.400 ************************************ 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.400 ************************************ 00:11:27.400 START TEST nvmf_zcopy 00:11:27.400 ************************************ 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:27.400 * Looking for test storage... 00:11:27.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.400 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.401 13:10:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.401 --rc genhtml_branch_coverage=1 00:11:27.401 --rc genhtml_function_coverage=1 00:11:27.401 --rc genhtml_legend=1 00:11:27.401 --rc geninfo_all_blocks=1 00:11:27.401 --rc geninfo_unexecuted_blocks=1 00:11:27.401 00:11:27.401 ' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.401 --rc genhtml_branch_coverage=1 00:11:27.401 --rc genhtml_function_coverage=1 00:11:27.401 --rc genhtml_legend=1 00:11:27.401 --rc geninfo_all_blocks=1 00:11:27.401 --rc geninfo_unexecuted_blocks=1 00:11:27.401 00:11:27.401 ' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.401 --rc genhtml_branch_coverage=1 00:11:27.401 --rc genhtml_function_coverage=1 00:11:27.401 --rc genhtml_legend=1 00:11:27.401 --rc geninfo_all_blocks=1 00:11:27.401 --rc geninfo_unexecuted_blocks=1 00:11:27.401 00:11:27.401 ' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.401 --rc genhtml_branch_coverage=1 00:11:27.401 --rc 
genhtml_function_coverage=1 00:11:27.401 --rc genhtml_legend=1 00:11:27.401 --rc geninfo_all_blocks=1 00:11:27.401 --rc geninfo_unexecuted_blocks=1 00:11:27.401 00:11:27.401 ' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.401 13:10:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.401 13:10:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.401 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.402 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.306 13:10:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:29.306 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.306 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:29.307 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:29.307 Found net devices under 0000:09:00.0: cvl_0_0 00:11:29.307 13:10:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:29.307 Found net devices under 0000:09:00.1: cvl_0_1 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.307 13:10:26 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.307 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.566 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.566 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.566 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.566 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:11:29.567 00:11:29.567 --- 10.0.0.2 ping statistics --- 00:11:29.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.567 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:11:29.567 00:11:29.567 --- 10.0.0.1 ping statistics --- 00:11:29.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.567 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3093748 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3093748 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- 
# '[' -z 3093748 ']' 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.567 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.567 [2024-11-25 13:10:27.108407] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:29.567 [2024-11-25 13:10:27.108498] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.567 [2024-11-25 13:10:27.182812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.831 [2024-11-25 13:10:27.240675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.831 [2024-11-25 13:10:27.240733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:29.831 [2024-11-25 13:10:27.240770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.831 [2024-11-25 13:10:27.240781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.831 [2024-11-25 13:10:27.240790] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.831 [2024-11-25 13:10:27.241409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.831 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.832 [2024-11-25 13:10:27.380870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.832 [2024-11-25 13:10:27.397054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.832 malloc0 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:29.832 { 00:11:29.832 "params": { 00:11:29.832 "name": "Nvme$subsystem", 00:11:29.832 "trtype": "$TEST_TRANSPORT", 00:11:29.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.832 "adrfam": "ipv4", 00:11:29.832 "trsvcid": "$NVMF_PORT", 00:11:29.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.832 "hdgst": ${hdgst:-false}, 00:11:29.832 "ddgst": ${ddgst:-false} 00:11:29.832 }, 00:11:29.832 "method": "bdev_nvme_attach_controller" 00:11:29.832 } 00:11:29.832 EOF 00:11:29.832 )") 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:29.832 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:29.832 "params": { 00:11:29.832 "name": "Nvme1", 00:11:29.832 "trtype": "tcp", 00:11:29.832 "traddr": "10.0.0.2", 00:11:29.832 "adrfam": "ipv4", 00:11:29.832 "trsvcid": "4420", 00:11:29.832 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.832 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.832 "hdgst": false, 00:11:29.832 "ddgst": false 00:11:29.832 }, 00:11:29.832 "method": "bdev_nvme_attach_controller" 00:11:29.832 }' 00:11:29.832 [2024-11-25 13:10:27.481224] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:29.832 [2024-11-25 13:10:27.481325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093894 ] 00:11:30.156 [2024-11-25 13:10:27.547553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.156 [2024-11-25 13:10:27.606984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.448 Running I/O for 10 seconds... 
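The xtrace above shows how `gen_nvmf_target_json` assembles the bdevperf `--json` config: a heredoc template is expanded once per subsystem index, the fragments are collected into a `config` array, joined with `IFS=,`, and piped through `jq .`. A self-contained sketch of that same pattern (the fixed values are copied from the rendered config visible in the log; the final `jq` pretty-print step is omitted so the sketch has no external dependency):

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-template pattern traced in nvmf/common.sh:
# expand one JSON fragment per subsystem, collect into an array,
# then join the fragments with commas.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  # Each fragment is one bdev_nvme_attach_controller RPC descriptor.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the traced helper does via IFS=,
# before handing the result to bdevperf on /dev/fd/62.
(IFS=,; printf '%s\n' "${config[*]}")
```

With a single subsystem this emits exactly one descriptor, matching the `printf '%s\n' '{ ... "name": "Nvme1" ... }'` line rendered in the log above.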
00:11:32.313 5479.00 IOPS, 42.80 MiB/s [2024-11-25T12:10:30.904Z] 5516.50 IOPS, 43.10 MiB/s [2024-11-25T12:10:32.277Z] 5525.67 IOPS, 43.17 MiB/s [2024-11-25T12:10:33.238Z] 5546.50 IOPS, 43.33 MiB/s [2024-11-25T12:10:34.172Z] 5545.20 IOPS, 43.32 MiB/s [2024-11-25T12:10:35.104Z] 5545.17 IOPS, 43.32 MiB/s [2024-11-25T12:10:36.038Z] 5548.14 IOPS, 43.34 MiB/s [2024-11-25T12:10:36.970Z] 5561.38 IOPS, 43.45 MiB/s [2024-11-25T12:10:38.345Z] 5560.00 IOPS, 43.44 MiB/s [2024-11-25T12:10:38.345Z] 5558.10 IOPS, 43.42 MiB/s 00:11:40.686 Latency(us) 00:11:40.686 [2024-11-25T12:10:38.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.686 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:40.686 Verification LBA range: start 0x0 length 0x1000 00:11:40.686 Nvme1n1 : 10.01 5562.89 43.46 0.00 0.00 22951.45 3422.44 31457.28 00:11:40.686 [2024-11-25T12:10:38.345Z] =================================================================================================================== 00:11:40.686 [2024-11-25T12:10:38.345Z] Total : 5562.89 43.46 0.00 0.00 22951.45 3422.44 31457.28 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3095096 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:40.686 13:10:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:40.686 { 00:11:40.686 "params": { 00:11:40.686 "name": "Nvme$subsystem", 00:11:40.686 "trtype": "$TEST_TRANSPORT", 00:11:40.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:40.686 "adrfam": "ipv4", 00:11:40.686 "trsvcid": "$NVMF_PORT", 00:11:40.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:40.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:40.686 "hdgst": ${hdgst:-false}, 00:11:40.686 "ddgst": ${ddgst:-false} 00:11:40.686 }, 00:11:40.686 "method": "bdev_nvme_attach_controller" 00:11:40.686 } 00:11:40.686 EOF 00:11:40.686 )") 00:11:40.686 [2024-11-25 13:10:38.149832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.149869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:40.686 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:40.686 "params": { 00:11:40.686 "name": "Nvme1", 00:11:40.686 "trtype": "tcp", 00:11:40.686 "traddr": "10.0.0.2", 00:11:40.686 "adrfam": "ipv4", 00:11:40.686 "trsvcid": "4420", 00:11:40.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:40.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:40.686 "hdgst": false, 00:11:40.686 "ddgst": false 00:11:40.686 }, 00:11:40.686 "method": "bdev_nvme_attach_controller" 00:11:40.686 }' 00:11:40.686 [2024-11-25 13:10:38.157800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.157823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.165821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.165842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.173840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.173860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.181862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.181882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.189882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.189901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.191558] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:11:40.686 [2024-11-25 13:10:38.191633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095096 ] 00:11:40.686 [2024-11-25 13:10:38.197901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.197921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.205925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.205945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.213945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.213964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.221966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.221985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.229989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.230009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.238008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.238027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.246031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.246051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:40.686 [2024-11-25 13:10:38.254055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.254082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.260536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.686 [2024-11-25 13:10:38.262075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.262095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.270125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.270155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.278144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.278177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.286140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.286161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.294162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.294182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.302184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.302204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.310206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.310226] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.686 [2024-11-25 13:10:38.318226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.686 [2024-11-25 13:10:38.318247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.687 [2024-11-25 13:10:38.324197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.687 [2024-11-25 13:10:38.326247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.687 [2024-11-25 13:10:38.326267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.687 [2024-11-25 13:10:38.334272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.687 [2024-11-25 13:10:38.334315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.687 [2024-11-25 13:10:38.342340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.687 [2024-11-25 13:10:38.342372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.350389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.350422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.358412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.358446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.366417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.366451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.374417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:11:40.945 [2024-11-25 13:10:38.374450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.382461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.382494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.390468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.390502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.398462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.398492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.406510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.406542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.414538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.414570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.422561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.422609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.430548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.430569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.438571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 
13:10:38.438606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.446616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.446641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.454637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.454661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.462671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.462694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.470688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.470711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.478708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.478732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.486728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.486752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.494782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.494806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.502768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.502789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.510791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.510811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.518812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.518832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.526854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.526890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.534855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.534877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.542875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.542896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.550896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.550916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.558917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.558937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 [2024-11-25 13:10:38.566940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.945 [2024-11-25 13:10:38.566960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.945 
[2024-11-25 13:10:38.574965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.946 [2024-11-25 13:10:38.574986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.946 [2024-11-25 13:10:38.582989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.946 [2024-11-25 13:10:38.583008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.946 [2024-11-25 13:10:38.591032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.946 [2024-11-25 13:10:38.591057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.946 [2024-11-25 13:10:38.599071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:40.946 [2024-11-25 13:10:38.599108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.946 Running I/O for 5 seconds... 00:11:41.203 [2024-11-25 13:10:38.610531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.203 [2024-11-25 13:10:38.610560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.203 [2024-11-25 13:10:38.620533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.203 [2024-11-25 13:10:38.620561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.203 [2024-11-25 13:10:38.631504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.203 [2024-11-25 13:10:38.631532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.203 [2024-11-25 13:10:38.643926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.203 [2024-11-25 13:10:38.643954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.203 [2024-11-25 
13:10:38.653494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.203 [2024-11-25 13:10:38.653522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.664655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.664683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.675091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.675119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.685901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.685929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.698566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.698595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.708529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.708566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.719274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.719309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.731762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.731789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.743363] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.743391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.752739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.752767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.763493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.763520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.776457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.776485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.788481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.788509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.797360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.797388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.808779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.808807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.821628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.821656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.833448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.833476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.842420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.842449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.204 [2024-11-25 13:10:38.853487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.204 [2024-11-25 13:10:38.853514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.864384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.864417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.876898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.876926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.887058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.887085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.897803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.897831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.908876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.908903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.919448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 
[2024-11-25 13:10:38.919475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.931804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.931832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.942285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.942320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.953498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.953526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.964199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.964226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.975386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.975413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.987754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.987781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:38.997481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:38.997509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.008033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.008061] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.018745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.018773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.031415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.031443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.043276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.043312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.052065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.052093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.063687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.063714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.074326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.074354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.085318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.085346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:41.461 [2024-11-25 13:10:39.095933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:41.461 [2024-11-25 13:10:39.095960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:41.461 [2024-11-25 13:10:39.105990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:41.461 [2024-11-25 13:10:39.106017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated at roughly 10 ms intervals, 13:10:39.116558 through 13:10:39.598989 ...]
00:11:41.975 11840.00 IOPS, 92.50 MiB/s [2024-11-25T12:10:39.634Z]
[... error pair repeated, 13:10:39.608512 through 13:10:40.602049 ...]
00:11:43.006 11854.00 IOPS, 92.61 MiB/s [2024-11-25T12:10:40.665Z]
[... error pair repeated, 13:10:40.612901 through 13:10:40.888415 ...]
00:11:43.265 [2024-11-25 13:10:40.899197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
[2024-11-25 13:10:40.899224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.265 [2024-11-25 13:10:40.911647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.265 [2024-11-25 13:10:40.911674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.265 [2024-11-25 13:10:40.921471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.265 [2024-11-25 13:10:40.921498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:40.932142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:40.932184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:40.942734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:40.942761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:40.953482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:40.953510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:40.963965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:40.963991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:40.974516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:40.974543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:40.985034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:40.985061] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:40.995653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:40.995681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.006289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.006326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.018602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.018638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.028370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.028397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.039418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.039445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.050333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.050360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.063334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.063362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.073348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.073376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:43.524 [2024-11-25 13:10:41.083914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.083942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.094487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.094514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.105107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.105134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.115678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.115705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.126325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.126359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.137124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.137151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.148222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.148250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.158822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.158850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.524 [2024-11-25 13:10:41.169869] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.524 [2024-11-25 13:10:41.169897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.182641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.182669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.194543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.194576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.203496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.203524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.214616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.214643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.226881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.226908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.237146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.237173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.247980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.248007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.260650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.260678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.270845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.270887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.281283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.281319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.292000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.292028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.304856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.304899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.316743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.316770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.325991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.326019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.337883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.337910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.348395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 
[2024-11-25 13:10:41.348423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.359217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.359244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.369721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.369748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.380187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.380214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.390805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.390832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.401278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.401314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.412167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.412194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.424943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.424985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:43.783 [2024-11-25 13:10:41.435511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:43.783 [2024-11-25 13:10:41.435539] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.446214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.446241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.458509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.458537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.468611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.468638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.479001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.479028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.489362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.489389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.499691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.499719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.510509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.510536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.523778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.523805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:44.043 [2024-11-25 13:10:41.533835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.533862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.544353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.544381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.554960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.554987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.565532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.565559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.576116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.576144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.586962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.586990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.599812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.599840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 11901.00 IOPS, 92.98 MiB/s [2024-11-25T12:10:41.702Z] [2024-11-25 13:10:41.610166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.610193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:44.043 [2024-11-25 13:10:41.620751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.620778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.631206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.631241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.642101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.642144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.654599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.654626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.664786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.664813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.675250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.675277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.685989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.686016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.043 [2024-11-25 13:10:41.696702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.043 [2024-11-25 13:10:41.696730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.707517] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.707544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.720110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.720137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.730187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.730214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.740667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.740695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.751734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.751762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.762128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.762156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.772952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.772979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.783917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.783944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.796752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.796779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.807162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.807190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.817462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.817490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.827835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.827862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.838540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.838576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.851118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.851146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.861418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.861446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.871785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.871812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.882396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 
[2024-11-25 13:10:41.882424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.894504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.894532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.904495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.904523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.915344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.915378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.927871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.927900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.936760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.936788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.302 [2024-11-25 13:10:41.949859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.302 [2024-11-25 13:10:41.949887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:41.960421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:41.960449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:41.971513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:41.971541] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:41.984326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:41.984354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:41.994336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:41.994363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:42.004955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:42.004983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:42.018461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:42.018488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:42.028799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:42.028826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:42.039646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:42.039673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:42.050612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:42.050646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [2024-11-25 13:10:42.061163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:42.061191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:44.561 [2024-11-25 13:10:42.073697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:44.561 [2024-11-25 13:10:42.073724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:44.561 [... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats roughly every 10 ms through 13:10:42.610 ...] 00:11:45.079 11896.75 IOPS, 92.94 MiB/s [2024-11-25T12:10:42.738Z] [... the error pair continues repeating through 13:10:42.686 ...] 00:11:45.079 [2024-11-25 13:10:42.697728]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:45.079 [2024-11-25 13:10:42.697756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:45.079 [... the same error pair repeats roughly every 10 ms through 13:10:43.582 ...] 00:11:46.114 [2024-11-25 13:10:43.594790]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.114 [2024-11-25 13:10:43.594817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.114 [... the error pair repeats at 13:10:43.605 ...] 00:11:46.114 11901.40 IOPS, 92.98 MiB/s [2024-11-25T12:10:43.773Z] [... the error pair repeats at 13:10:43.615 and 13:10:43.623 ...] 00:11:46.114
00:11:46.114 Latency(us)
00:11:46.114 [2024-11-25T12:10:43.773Z] Device Information : runtime(s)     IOPS      MiB/s   Fail/s   TO/s    Average    min       max
00:11:46.114 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:46.114 Nvme1n1            :       5.01    11903.13    92.99    0.00    0.00   10740.30   4466.16   19126.80
00:11:46.114 [2024-11-25T12:10:43.773Z] ===================================================================================================================
00:11:46.114 [2024-11-25T12:10:43.773Z] Total              :            11903.13    92.99    0.00    0.00   10740.30   4466.16   19126.80
00:11:46.114 [... the error pair repeats at 13:10:43.627 ...] 00:11:46.114 [2024-11-25 13:10:43.635971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.114 [2024-11-25 13:10:43.635997]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.114 [... the same error pair repeats at ~8 ms intervals from 13:10:43.643 through 13:10:43.764 ...] 00:11:46.373 [2024-11-25 13:10:43.772379]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.772417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.780399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.780422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.788385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.788408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.796411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.796434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.804448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.804470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.812506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.812545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.820532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.820574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.828548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.828586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.836526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.836548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.373 [2024-11-25 13:10:43.844545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.373 [2024-11-25 13:10:43.844567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.374 [2024-11-25 13:10:43.852570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:46.374 [2024-11-25 13:10:43.852592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:46.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3095096) - No such process 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3095096 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.374 delay0 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.374 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:46.374 [2024-11-25 13:10:43.938165] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:52.932 [2024-11-25 13:10:50.003651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d84a0 is same with the state(6) to be set 00:11:52.932 Initializing NVMe Controllers 00:11:52.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:52.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:52.932 Initialization complete. Launching workers. 
00:11:52.932 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 111 00:11:52.932 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 398, failed to submit 33 00:11:52.932 success 255, unsuccessful 143, failed 0 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.932 rmmod nvme_tcp 00:11:52.932 rmmod nvme_fabrics 00:11:52.932 rmmod nvme_keyring 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3093748 ']' 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3093748 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3093748 ']' 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3093748 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093748 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093748' 00:11:52.932 killing process with pid 3093748 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3093748 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3093748 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.932 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:54.839 00:11:54.839 real 0m27.878s 00:11:54.839 user 0m41.127s 00:11:54.839 sys 0m8.153s 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:54.839 ************************************ 00:11:54.839 END TEST nvmf_zcopy 00:11:54.839 ************************************ 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:54.839 ************************************ 00:11:54.839 START TEST nvmf_nmic 00:11:54.839 ************************************ 00:11:54.839 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:55.100 * Looking for test storage... 
00:11:55.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.100 13:10:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:55.100 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:55.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.101 --rc genhtml_branch_coverage=1 00:11:55.101 --rc genhtml_function_coverage=1 00:11:55.101 --rc genhtml_legend=1 00:11:55.101 --rc geninfo_all_blocks=1 00:11:55.101 --rc geninfo_unexecuted_blocks=1 
00:11:55.101 00:11:55.101 ' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:55.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.101 --rc genhtml_branch_coverage=1 00:11:55.101 --rc genhtml_function_coverage=1 00:11:55.101 --rc genhtml_legend=1 00:11:55.101 --rc geninfo_all_blocks=1 00:11:55.101 --rc geninfo_unexecuted_blocks=1 00:11:55.101 00:11:55.101 ' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:55.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.101 --rc genhtml_branch_coverage=1 00:11:55.101 --rc genhtml_function_coverage=1 00:11:55.101 --rc genhtml_legend=1 00:11:55.101 --rc geninfo_all_blocks=1 00:11:55.101 --rc geninfo_unexecuted_blocks=1 00:11:55.101 00:11:55.101 ' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:55.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.101 --rc genhtml_branch_coverage=1 00:11:55.101 --rc genhtml_function_coverage=1 00:11:55.101 --rc genhtml_legend=1 00:11:55.101 --rc geninfo_all_blocks=1 00:11:55.101 --rc geninfo_unexecuted_blocks=1 00:11:55.101 00:11:55.101 ' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.101 13:10:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.101 
13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.101 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.636 13:10:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:57.636 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:57.636 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.636 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:57.637 Found net devices under 0000:09:00.0: cvl_0_0 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:57.637 Found net devices under 0000:09:00.1: cvl_0_1 00:11:57.637 
13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:11:57.637 00:11:57.637 --- 10.0.0.2 ping statistics --- 00:11:57.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.637 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:11:57.637 00:11:57.637 --- 10.0.0.1 ping statistics --- 00:11:57.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.637 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3098492 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3098492 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3098492 ']' 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.637 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 [2024-11-25 13:10:54.943169] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:11:57.637 [2024-11-25 13:10:54.943257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.637 [2024-11-25 13:10:55.018408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.637 [2024-11-25 13:10:55.079964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.637 [2024-11-25 13:10:55.080017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:57.637 [2024-11-25 13:10:55.080046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.637 [2024-11-25 13:10:55.080057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.637 [2024-11-25 13:10:55.080066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.637 [2024-11-25 13:10:55.081727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.637 [2024-11-25 13:10:55.081794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.637 [2024-11-25 13:10:55.081861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.637 [2024-11-25 13:10:55.081864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.637 [2024-11-25 13:10:55.241769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.637 
13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:57.637 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.638 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 Malloc0 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 [2024-11-25 13:10:55.314027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:57.896 test case1: single bdev can't be used in multiple subsystems 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 [2024-11-25 13:10:55.337877] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:57.896 [2024-11-25 
13:10:55.337906] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:57.896 [2024-11-25 13:10:55.337935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:57.896 request: 00:11:57.896 { 00:11:57.896 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:57.896 "namespace": { 00:11:57.896 "bdev_name": "Malloc0", 00:11:57.896 "no_auto_visible": false 00:11:57.896 }, 00:11:57.896 "method": "nvmf_subsystem_add_ns", 00:11:57.896 "req_id": 1 00:11:57.896 } 00:11:57.896 Got JSON-RPC error response 00:11:57.896 response: 00:11:57.896 { 00:11:57.896 "code": -32602, 00:11:57.896 "message": "Invalid parameters" 00:11:57.896 } 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:57.896 Adding namespace failed - expected result. 
00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:57.896 test case2: host connect to nvmf target in multiple paths 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:57.896 [2024-11-25 13:10:55.345982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.896 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.461 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:59.025 13:10:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.025 13:10:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:59.025 13:10:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.025 13:10:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:59.025 13:10:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:12:00.983 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:00.983 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:00.983 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.983 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:00.983 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.983 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:00.983 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:01.243 [global] 00:12:01.243 thread=1 00:12:01.243 invalidate=1 00:12:01.243 rw=write 00:12:01.243 time_based=1 00:12:01.243 runtime=1 00:12:01.243 ioengine=libaio 00:12:01.243 direct=1 00:12:01.243 bs=4096 00:12:01.243 iodepth=1 00:12:01.243 norandommap=0 00:12:01.243 numjobs=1 00:12:01.243 00:12:01.243 verify_dump=1 00:12:01.243 verify_backlog=512 00:12:01.243 verify_state_save=0 00:12:01.243 do_verify=1 00:12:01.243 verify=crc32c-intel 00:12:01.243 [job0] 00:12:01.243 filename=/dev/nvme0n1 00:12:01.243 Could not set queue depth (nvme0n1) 00:12:01.243 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:01.243 fio-3.35 00:12:01.243 Starting 1 thread 00:12:02.615 00:12:02.615 job0: (groupid=0, jobs=1): err= 0: pid=3099016: Mon Nov 25 13:10:59 2024 00:12:02.615 read: IOPS=1692, BW=6769KiB/s (6932kB/s)(6776KiB/1001msec) 00:12:02.615 slat (nsec): min=5511, max=48011, avg=13145.62, stdev=5955.12 00:12:02.615 clat (usec): min=202, max=41030, avg=308.42, stdev=1399.82 00:12:02.615 lat (usec): min=210, max=41049, 
avg=321.56, stdev=1399.92 00:12:02.615 clat percentiles (usec): 00:12:02.615 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 235], 00:12:02.615 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:12:02.615 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:12:02.615 | 99.00th=[ 474], 99.50th=[ 611], 99.90th=[40633], 99.95th=[41157], 00:12:02.615 | 99.99th=[41157] 00:12:02.615 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:02.615 slat (usec): min=7, max=29045, avg=31.54, stdev=641.49 00:12:02.615 clat (usec): min=124, max=343, avg=182.75, stdev=31.76 00:12:02.615 lat (usec): min=134, max=29302, avg=214.29, stdev=644.20 00:12:02.615 clat percentiles (usec): 00:12:02.615 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 153], 00:12:02.615 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 186], 60.00th=[ 192], 00:12:02.615 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 233], 00:12:02.615 | 99.00th=[ 285], 99.50th=[ 302], 99.90th=[ 330], 99.95th=[ 343], 00:12:02.615 | 99.99th=[ 343] 00:12:02.615 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:12:02.615 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:02.615 lat (usec) : 250=71.25%, 500=28.35%, 750=0.32% 00:12:02.615 lat (msec) : 4=0.03%, 50=0.05% 00:12:02.615 cpu : usr=4.00%, sys=7.70%, ctx=3745, majf=0, minf=1 00:12:02.615 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.615 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.615 issued rwts: total=1694,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.615 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.615 00:12:02.615 Run status group 0 (all jobs): 00:12:02.615 READ: bw=6769KiB/s (6932kB/s), 6769KiB/s-6769KiB/s (6932kB/s-6932kB/s), io=6776KiB 
(6939kB), run=1001-1001msec 00:12:02.615 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:12:02.615 00:12:02.615 Disk stats (read/write): 00:12:02.615 nvme0n1: ios=1562/1814, merge=0/0, ticks=1452/300, in_queue=1752, util=98.70% 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.615 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.615 rmmod nvme_tcp 00:12:02.615 rmmod nvme_fabrics 00:12:02.615 rmmod nvme_keyring 00:12:02.873 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.873 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:02.873 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:02.873 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3098492 ']' 00:12:02.873 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3098492 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3098492 ']' 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3098492 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3098492 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3098492' 00:12:02.874 killing process with pid 3098492 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3098492 00:12:02.874 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3098492 00:12:03.132 13:11:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.132 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.036 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.036 00:12:05.036 real 0m10.185s 00:12:05.036 user 0m22.946s 00:12:05.036 sys 0m2.571s 00:12:05.036 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.036 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:05.036 ************************************ 00:12:05.036 END TEST nvmf_nmic 00:12:05.036 ************************************ 00:12:05.036 13:11:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:05.036 13:11:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.036 13:11:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.036 13:11:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:05.295 ************************************ 00:12:05.295 START TEST nvmf_fio_target 00:12:05.295 ************************************ 00:12:05.295 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:05.295 * Looking for test storage... 00:12:05.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:05.296 13:11:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.296 --rc genhtml_branch_coverage=1 00:12:05.296 --rc genhtml_function_coverage=1 00:12:05.296 --rc genhtml_legend=1 00:12:05.296 --rc geninfo_all_blocks=1 00:12:05.296 --rc geninfo_unexecuted_blocks=1 00:12:05.296 00:12:05.296 ' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.296 --rc genhtml_branch_coverage=1 00:12:05.296 --rc genhtml_function_coverage=1 00:12:05.296 --rc genhtml_legend=1 00:12:05.296 --rc geninfo_all_blocks=1 00:12:05.296 --rc geninfo_unexecuted_blocks=1 00:12:05.296 00:12:05.296 ' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.296 --rc genhtml_branch_coverage=1 00:12:05.296 --rc genhtml_function_coverage=1 00:12:05.296 --rc genhtml_legend=1 00:12:05.296 --rc geninfo_all_blocks=1 00:12:05.296 --rc geninfo_unexecuted_blocks=1 00:12:05.296 00:12:05.296 ' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:05.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.296 --rc genhtml_branch_coverage=1 00:12:05.296 --rc genhtml_function_coverage=1 00:12:05.296 --rc genhtml_legend=1 00:12:05.296 --rc geninfo_all_blocks=1 00:12:05.296 --rc geninfo_unexecuted_blocks=1 00:12:05.296 00:12:05.296 ' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.296 13:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.824 13:11:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:07.824 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:07.824 13:11:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:07.824 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:07.824 Found net devices under 0000:09:00.0: cvl_0_0 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:07.824 Found net devices under 0000:09:00.1: cvl_0_1 
00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:07.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:12:07.824 00:12:07.824 --- 10.0.0.2 ping statistics --- 00:12:07.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.824 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:12:07.824 00:12:07.824 --- 10.0.0.1 ping statistics --- 00:12:07.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.824 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3101226 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3101226 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3101226 ']' 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.824 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.824 [2024-11-25 13:11:05.281346] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:12:07.824 [2024-11-25 13:11:05.281447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.824 [2024-11-25 13:11:05.355559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.824 [2024-11-25 13:11:05.416468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.824 [2024-11-25 13:11:05.416522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.824 [2024-11-25 13:11:05.416551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.824 [2024-11-25 13:11:05.416563] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.825 [2024-11-25 13:11:05.416572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:07.825 [2024-11-25 13:11:05.418138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.825 [2024-11-25 13:11:05.418197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.825 [2024-11-25 13:11:05.418265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.825 [2024-11-25 13:11:05.418268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.081 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.081 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:08.081 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.082 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.082 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.082 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.082 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:08.339 [2024-11-25 13:11:05.860032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.339 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.596 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:08.596 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.854 13:11:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:08.854 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:09.111 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:09.111 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:09.676 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:09.676 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:09.676 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:10.241 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:10.241 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:10.241 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:10.241 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:10.806 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:10.807 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:10.807 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:11.063 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:11.063 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.320 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:11.320 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.577 13:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.832 [2024-11-25 13:11:09.481670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.090 13:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:12.347 13:11:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:12.605 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:13.171 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:13.171 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.171 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.171 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:13.171 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:13.171 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.072 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.330 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.330 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.330 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:15.330 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.330 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:15.330 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:15.330 [global] 00:12:15.330 thread=1 00:12:15.330 invalidate=1 00:12:15.330 rw=write 00:12:15.330 time_based=1 00:12:15.330 runtime=1 00:12:15.330 ioengine=libaio 00:12:15.330 direct=1 00:12:15.330 bs=4096 00:12:15.330 iodepth=1 00:12:15.330 norandommap=0 00:12:15.330 numjobs=1 00:12:15.330 00:12:15.330 
verify_dump=1 00:12:15.330 verify_backlog=512 00:12:15.330 verify_state_save=0 00:12:15.330 do_verify=1 00:12:15.330 verify=crc32c-intel 00:12:15.330 [job0] 00:12:15.330 filename=/dev/nvme0n1 00:12:15.330 [job1] 00:12:15.330 filename=/dev/nvme0n2 00:12:15.330 [job2] 00:12:15.330 filename=/dev/nvme0n3 00:12:15.330 [job3] 00:12:15.330 filename=/dev/nvme0n4 00:12:15.330 Could not set queue depth (nvme0n1) 00:12:15.330 Could not set queue depth (nvme0n2) 00:12:15.330 Could not set queue depth (nvme0n3) 00:12:15.330 Could not set queue depth (nvme0n4) 00:12:15.330 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.330 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.330 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.330 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:15.330 fio-3.35 00:12:15.330 Starting 4 threads 00:12:16.704 00:12:16.704 job0: (groupid=0, jobs=1): err= 0: pid=3102303: Mon Nov 25 13:11:14 2024 00:12:16.704 read: IOPS=579, BW=2318KiB/s (2374kB/s)(2360KiB/1018msec) 00:12:16.704 slat (nsec): min=4227, max=33853, avg=6855.57, stdev=4923.50 00:12:16.704 clat (usec): min=185, max=42042, avg=1342.15, stdev=6705.84 00:12:16.704 lat (usec): min=191, max=42048, avg=1349.01, stdev=6707.09 00:12:16.704 clat percentiles (usec): 00:12:16.704 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:12:16.704 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:12:16.704 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 289], 00:12:16.704 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:16.704 | 99.99th=[42206] 00:12:16.704 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:12:16.704 slat (nsec): min=6107, max=37418, avg=13053.74, 
stdev=5628.67 00:12:16.704 clat (usec): min=136, max=621, avg=199.13, stdev=36.70 00:12:16.704 lat (usec): min=145, max=639, avg=212.18, stdev=35.70 00:12:16.704 clat percentiles (usec): 00:12:16.704 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:12:16.704 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 196], 60.00th=[ 208], 00:12:16.704 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 255], 00:12:16.704 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 400], 99.95th=[ 619], 00:12:16.704 | 99.99th=[ 619] 00:12:16.704 bw ( KiB/s): min= 8192, max= 8192, per=46.15%, avg=8192.00, stdev= 0.00, samples=1 00:12:16.704 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:16.704 lat (usec) : 250=93.00%, 500=5.89%, 750=0.12% 00:12:16.704 lat (msec) : 50=0.99% 00:12:16.704 cpu : usr=1.47%, sys=1.08%, ctx=1615, majf=0, minf=1 00:12:16.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.704 issued rwts: total=590,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.704 job1: (groupid=0, jobs=1): err= 0: pid=3102304: Mon Nov 25 13:11:14 2024 00:12:16.704 read: IOPS=415, BW=1662KiB/s (1702kB/s)(1692KiB/1018msec) 00:12:16.704 slat (nsec): min=5548, max=42584, avg=13680.91, stdev=7826.97 00:12:16.704 clat (usec): min=173, max=41926, avg=2116.82, stdev=8459.19 00:12:16.704 lat (usec): min=179, max=41948, avg=2130.50, stdev=8461.57 00:12:16.704 clat percentiles (usec): 00:12:16.704 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:12:16.704 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 265], 60.00th=[ 334], 00:12:16.704 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 603], 00:12:16.704 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 
99.95th=[41681], 00:12:16.704 | 99.99th=[41681] 00:12:16.704 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:12:16.704 slat (nsec): min=7729, max=40093, avg=11900.90, stdev=6277.54 00:12:16.704 clat (usec): min=148, max=427, avg=207.42, stdev=26.34 00:12:16.704 lat (usec): min=157, max=438, avg=219.32, stdev=26.59 00:12:16.704 clat percentiles (usec): 00:12:16.704 | 1.00th=[ 155], 5.00th=[ 165], 10.00th=[ 176], 20.00th=[ 194], 00:12:16.704 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 212], 00:12:16.705 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 237], 00:12:16.705 | 99.00th=[ 273], 99.50th=[ 383], 99.90th=[ 429], 99.95th=[ 429], 00:12:16.705 | 99.99th=[ 429] 00:12:16.705 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:12:16.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:16.705 lat (usec) : 250=75.29%, 500=21.60%, 750=1.07% 00:12:16.705 lat (msec) : 50=2.03% 00:12:16.705 cpu : usr=0.69%, sys=1.67%, ctx=935, majf=0, minf=1 00:12:16.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.705 issued rwts: total=423,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.705 job2: (groupid=0, jobs=1): err= 0: pid=3102305: Mon Nov 25 13:11:14 2024 00:12:16.705 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:16.705 slat (nsec): min=4387, max=54583, avg=11874.66, stdev=7290.46 00:12:16.705 clat (usec): min=173, max=975, avg=237.51, stdev=67.27 00:12:16.705 lat (usec): min=182, max=1009, avg=249.39, stdev=70.37 00:12:16.705 clat percentiles (usec): 00:12:16.705 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:12:16.705 | 30.00th=[ 204], 40.00th=[ 210], 
50.00th=[ 219], 60.00th=[ 231], 00:12:16.705 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 277], 95.00th=[ 412], 00:12:16.705 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 611], 99.95th=[ 644], 00:12:16.705 | 99.99th=[ 979] 00:12:16.705 write: IOPS=2520, BW=9.84MiB/s (10.3MB/s)(9.86MiB/1001msec); 0 zone resets 00:12:16.705 slat (nsec): min=5529, max=61898, avg=13034.11, stdev=5491.37 00:12:16.705 clat (usec): min=126, max=509, avg=174.60, stdev=39.16 00:12:16.705 lat (usec): min=134, max=518, avg=187.63, stdev=39.47 00:12:16.705 clat percentiles (usec): 00:12:16.705 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:12:16.705 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 178], 00:12:16.705 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 217], 95.00th=[ 260], 00:12:16.705 | 99.00th=[ 314], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 461], 00:12:16.705 | 99.99th=[ 510] 00:12:16.705 bw ( KiB/s): min= 9280, max= 9280, per=52.28%, avg=9280.00, stdev= 0.00, samples=1 00:12:16.705 iops : min= 2320, max= 2320, avg=2320.00, stdev= 0.00, samples=1 00:12:16.705 lat (usec) : 250=87.53%, 500=11.84%, 750=0.61%, 1000=0.02% 00:12:16.705 cpu : usr=2.80%, sys=6.30%, ctx=4571, majf=0, minf=2 00:12:16.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.705 issued rwts: total=2048,2523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.705 job3: (groupid=0, jobs=1): err= 0: pid=3102306: Mon Nov 25 13:11:14 2024 00:12:16.705 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:12:16.705 slat (nsec): min=7758, max=35059, avg=26277.41, stdev=9702.36 00:12:16.705 clat (usec): min=40422, max=41101, avg=40941.10, stdev=126.60 00:12:16.705 lat (usec): min=40430, max=41119, avg=40967.38, stdev=129.64 
00:12:16.705 clat percentiles (usec): 00:12:16.705 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:16.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:16.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:16.705 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:16.705 | 99.99th=[41157] 00:12:16.705 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:12:16.705 slat (nsec): min=6716, max=42564, avg=11944.81, stdev=6545.23 00:12:16.705 clat (usec): min=160, max=411, avg=235.42, stdev=52.38 00:12:16.705 lat (usec): min=167, max=431, avg=247.36, stdev=53.06 00:12:16.705 clat percentiles (usec): 00:12:16.705 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 196], 00:12:16.705 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 235], 00:12:16.705 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 367], 00:12:16.705 | 99.00th=[ 392], 99.50th=[ 392], 99.90th=[ 412], 99.95th=[ 412], 00:12:16.705 | 99.99th=[ 412] 00:12:16.705 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:12:16.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:16.705 lat (usec) : 250=65.92%, 500=29.96% 00:12:16.705 lat (msec) : 50=4.12% 00:12:16.705 cpu : usr=0.58%, sys=0.29%, ctx=534, majf=0, minf=1 00:12:16.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:16.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:16.705 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:16.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:16.705 00:12:16.705 Run status group 0 (all jobs): 00:12:16.705 READ: bw=11.7MiB/s (12.3MB/s), 85.4KiB/s-8184KiB/s (87.5kB/s-8380kB/s), io=12.0MiB (12.6MB), run=1001-1030msec 00:12:16.705 
WRITE: bw=17.3MiB/s (18.2MB/s), 1988KiB/s-9.84MiB/s (2036kB/s-10.3MB/s), io=17.9MiB (18.7MB), run=1001-1030msec 00:12:16.705 00:12:16.705 Disk stats (read/write): 00:12:16.705 nvme0n1: ios=626/1024, merge=0/0, ticks=601/201, in_queue=802, util=86.07% 00:12:16.705 nvme0n2: ios=468/512, merge=0/0, ticks=756/107, in_queue=863, util=90.21% 00:12:16.705 nvme0n3: ios=1827/2048, merge=0/0, ticks=484/348, in_queue=832, util=94.43% 00:12:16.705 nvme0n4: ios=74/512, merge=0/0, ticks=768/122, in_queue=890, util=95.54% 00:12:16.705 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:16.705 [global] 00:12:16.705 thread=1 00:12:16.705 invalidate=1 00:12:16.705 rw=randwrite 00:12:16.705 time_based=1 00:12:16.705 runtime=1 00:12:16.705 ioengine=libaio 00:12:16.705 direct=1 00:12:16.705 bs=4096 00:12:16.705 iodepth=1 00:12:16.705 norandommap=0 00:12:16.705 numjobs=1 00:12:16.705 00:12:16.705 verify_dump=1 00:12:16.705 verify_backlog=512 00:12:16.705 verify_state_save=0 00:12:16.705 do_verify=1 00:12:16.705 verify=crc32c-intel 00:12:16.705 [job0] 00:12:16.705 filename=/dev/nvme0n1 00:12:16.705 [job1] 00:12:16.705 filename=/dev/nvme0n2 00:12:16.705 [job2] 00:12:16.705 filename=/dev/nvme0n3 00:12:16.705 [job3] 00:12:16.705 filename=/dev/nvme0n4 00:12:16.705 Could not set queue depth (nvme0n1) 00:12:16.705 Could not set queue depth (nvme0n2) 00:12:16.705 Could not set queue depth (nvme0n3) 00:12:16.705 Could not set queue depth (nvme0n4) 00:12:16.963 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.963 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.963 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.963 job3: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:16.963 fio-3.35 00:12:16.963 Starting 4 threads 00:12:18.336 00:12:18.336 job0: (groupid=0, jobs=1): err= 0: pid=3102545: Mon Nov 25 13:11:15 2024 00:12:18.336 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:18.336 slat (nsec): min=5544, max=67333, avg=13563.18, stdev=6291.56 00:12:18.336 clat (usec): min=194, max=41189, avg=686.28, stdev=3998.83 00:12:18.336 lat (usec): min=200, max=41256, avg=699.85, stdev=3999.54 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 241], 00:12:18.336 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:12:18.336 | 70.00th=[ 277], 80.00th=[ 355], 90.00th=[ 429], 95.00th=[ 486], 00:12:18.336 | 99.00th=[ 1205], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:18.336 | 99.99th=[41157] 00:12:18.336 write: IOPS=1274, BW=5099KiB/s (5221kB/s)(5104KiB/1001msec); 0 zone resets 00:12:18.336 slat (nsec): min=7162, max=51523, avg=16590.30, stdev=6377.42 00:12:18.336 clat (usec): min=144, max=410, avg=196.61, stdev=23.44 00:12:18.336 lat (usec): min=155, max=418, avg=213.20, stdev=22.42 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 180], 00:12:18.336 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:12:18.336 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 237], 00:12:18.336 | 99.00th=[ 258], 99.50th=[ 273], 99.90th=[ 351], 99.95th=[ 412], 00:12:18.336 | 99.99th=[ 412] 00:12:18.336 bw ( KiB/s): min= 4096, max= 4096, per=19.51%, avg=4096.00, stdev= 0.00, samples=1 00:12:18.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:18.336 lat (usec) : 250=70.35%, 500=27.96%, 750=1.13%, 1000=0.09% 00:12:18.336 lat (msec) : 2=0.04%, 50=0.43% 00:12:18.336 cpu : usr=2.10%, sys=5.40%, ctx=2300, majf=0, minf=2 00:12:18.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.336 issued rwts: total=1024,1276,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.336 job1: (groupid=0, jobs=1): err= 0: pid=3102546: Mon Nov 25 13:11:15 2024 00:12:18.336 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:18.336 slat (nsec): min=5050, max=78992, avg=12478.90, stdev=8395.27 00:12:18.336 clat (usec): min=177, max=41111, avg=1697.76, stdev=7494.09 00:12:18.336 lat (usec): min=187, max=41129, avg=1710.24, stdev=7496.40 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 212], 00:12:18.336 | 30.00th=[ 223], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 269], 00:12:18.336 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 359], 95.00th=[ 404], 00:12:18.336 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:18.336 | 99.99th=[41157] 00:12:18.336 write: IOPS=530, BW=2122KiB/s (2173kB/s)(2124KiB/1001msec); 0 zone resets 00:12:18.336 slat (nsec): min=5829, max=38808, avg=11737.59, stdev=5535.98 00:12:18.336 clat (usec): min=148, max=439, avg=215.26, stdev=23.08 00:12:18.336 lat (usec): min=163, max=446, avg=227.00, stdev=21.80 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 161], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 200], 00:12:18.336 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 219], 00:12:18.336 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 243], 00:12:18.336 | 99.00th=[ 269], 99.50th=[ 359], 99.90th=[ 441], 99.95th=[ 441], 00:12:18.336 | 99.99th=[ 441] 00:12:18.336 bw ( KiB/s): min= 4096, max= 4096, per=19.51%, avg=4096.00, stdev= 0.00, samples=1 00:12:18.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:18.336 lat (usec) : 
250=73.73%, 500=24.45% 00:12:18.336 lat (msec) : 10=0.10%, 50=1.73% 00:12:18.336 cpu : usr=0.60%, sys=1.30%, ctx=1044, majf=0, minf=1 00:12:18.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.336 issued rwts: total=512,531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.336 job2: (groupid=0, jobs=1): err= 0: pid=3102547: Mon Nov 25 13:11:15 2024 00:12:18.336 read: IOPS=1160, BW=4643KiB/s (4754kB/s)(4768KiB/1027msec) 00:12:18.336 slat (nsec): min=5804, max=51901, avg=12951.42, stdev=5862.55 00:12:18.336 clat (usec): min=204, max=41125, avg=560.40, stdev=3517.61 00:12:18.336 lat (usec): min=212, max=41134, avg=573.35, stdev=3518.02 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 235], 00:12:18.336 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:12:18.336 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:12:18.336 | 99.00th=[ 553], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:12:18.336 | 99.99th=[41157] 00:12:18.336 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:12:18.336 slat (nsec): min=7794, max=54146, avg=17942.00, stdev=7069.18 00:12:18.336 clat (usec): min=142, max=396, avg=196.93, stdev=33.77 00:12:18.336 lat (usec): min=151, max=425, avg=214.87, stdev=35.49 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:12:18.336 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 192], 00:12:18.336 | 70.00th=[ 200], 80.00th=[ 225], 90.00th=[ 258], 95.00th=[ 265], 00:12:18.336 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 351], 99.95th=[ 396], 00:12:18.336 | 99.99th=[ 396] 00:12:18.336 bw ( 
KiB/s): min= 4096, max= 8192, per=29.26%, avg=6144.00, stdev=2896.31, samples=2 00:12:18.336 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:12:18.336 lat (usec) : 250=71.85%, 500=27.53%, 750=0.29% 00:12:18.336 lat (msec) : 50=0.33% 00:12:18.336 cpu : usr=2.83%, sys=5.75%, ctx=2731, majf=0, minf=1 00:12:18.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.336 issued rwts: total=1192,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.336 job3: (groupid=0, jobs=1): err= 0: pid=3102548: Mon Nov 25 13:11:15 2024 00:12:18.336 read: IOPS=1858, BW=7433KiB/s (7611kB/s)(7440KiB/1001msec) 00:12:18.336 slat (nsec): min=6022, max=52188, avg=13814.44, stdev=5759.61 00:12:18.336 clat (usec): min=193, max=40483, avg=280.11, stdev=933.47 00:12:18.336 lat (usec): min=200, max=40490, avg=293.92, stdev=933.40 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 208], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 237], 00:12:18.336 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:12:18.336 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:12:18.336 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 586], 99.95th=[40633], 00:12:18.336 | 99.99th=[40633] 00:12:18.336 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:18.336 slat (nsec): min=8259, max=55220, avg=17436.33, stdev=7169.09 00:12:18.336 clat (usec): min=150, max=345, avg=195.30, stdev=22.78 00:12:18.336 lat (usec): min=162, max=354, avg=212.74, stdev=25.99 00:12:18.336 clat percentiles (usec): 00:12:18.336 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 178], 00:12:18.336 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:12:18.336 | 
70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 239], 00:12:18.336 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 306], 00:12:18.336 | 99.99th=[ 347] 00:12:18.336 bw ( KiB/s): min= 8192, max= 8192, per=39.01%, avg=8192.00, stdev= 0.00, samples=1 00:12:18.336 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:18.336 lat (usec) : 250=72.11%, 500=27.74%, 750=0.13% 00:12:18.336 lat (msec) : 50=0.03% 00:12:18.336 cpu : usr=4.90%, sys=7.70%, ctx=3909, majf=0, minf=1 00:12:18.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:18.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:18.337 issued rwts: total=1860,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:18.337 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:18.337 00:12:18.337 Run status group 0 (all jobs): 00:12:18.337 READ: bw=17.5MiB/s (18.3MB/s), 2046KiB/s-7433KiB/s (2095kB/s-7611kB/s), io=17.9MiB (18.8MB), run=1001-1027msec 00:12:18.337 WRITE: bw=20.5MiB/s (21.5MB/s), 2122KiB/s-8184KiB/s (2173kB/s-8380kB/s), io=21.1MiB (22.1MB), run=1001-1027msec 00:12:18.337 00:12:18.337 Disk stats (read/write): 00:12:18.337 nvme0n1: ios=894/1024, merge=0/0, ticks=626/194, in_queue=820, util=87.68% 00:12:18.337 nvme0n2: ios=213/512, merge=0/0, ticks=1001/111, in_queue=1112, util=95.74% 00:12:18.337 nvme0n3: ios=1064/1536, merge=0/0, ticks=1400/275, in_queue=1675, util=97.82% 00:12:18.337 nvme0n4: ios=1589/1800, merge=0/0, ticks=690/353, in_queue=1043, util=96.96% 00:12:18.337 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:18.337 [global] 00:12:18.337 thread=1 00:12:18.337 invalidate=1 00:12:18.337 rw=write 00:12:18.337 time_based=1 00:12:18.337 runtime=1 00:12:18.337 ioengine=libaio 
00:12:18.337 direct=1 00:12:18.337 bs=4096 00:12:18.337 iodepth=128 00:12:18.337 norandommap=0 00:12:18.337 numjobs=1 00:12:18.337 00:12:18.337 verify_dump=1 00:12:18.337 verify_backlog=512 00:12:18.337 verify_state_save=0 00:12:18.337 do_verify=1 00:12:18.337 verify=crc32c-intel 00:12:18.337 [job0] 00:12:18.337 filename=/dev/nvme0n1 00:12:18.337 [job1] 00:12:18.337 filename=/dev/nvme0n2 00:12:18.337 [job2] 00:12:18.337 filename=/dev/nvme0n3 00:12:18.337 [job3] 00:12:18.337 filename=/dev/nvme0n4 00:12:18.337 Could not set queue depth (nvme0n1) 00:12:18.337 Could not set queue depth (nvme0n2) 00:12:18.337 Could not set queue depth (nvme0n3) 00:12:18.337 Could not set queue depth (nvme0n4) 00:12:18.337 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.337 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.337 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.337 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:18.337 fio-3.35 00:12:18.337 Starting 4 threads 00:12:19.715 00:12:19.715 job0: (groupid=0, jobs=1): err= 0: pid=3102774: Mon Nov 25 13:11:17 2024 00:12:19.715 read: IOPS=3466, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1002msec) 00:12:19.715 slat (usec): min=2, max=12444, avg=118.44, stdev=801.65 00:12:19.715 clat (usec): min=976, max=31416, avg=15058.21, stdev=4327.95 00:12:19.715 lat (usec): min=2949, max=32838, avg=15176.64, stdev=4407.60 00:12:19.715 clat percentiles (usec): 00:12:19.715 | 1.00th=[ 6194], 5.00th=[10159], 10.00th=[10814], 20.00th=[11731], 00:12:19.715 | 30.00th=[12387], 40.00th=[13304], 50.00th=[14353], 60.00th=[14746], 00:12:19.715 | 70.00th=[16581], 80.00th=[18220], 90.00th=[21365], 95.00th=[22938], 00:12:19.715 | 99.00th=[27919], 99.50th=[28967], 99.90th=[31327], 99.95th=[31327], 00:12:19.715 | 
99.99th=[31327] 00:12:19.715 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:12:19.715 slat (usec): min=3, max=10202, avg=156.21, stdev=670.53 00:12:19.715 clat (usec): min=3831, max=44677, avg=20638.74, stdev=8403.70 00:12:19.715 lat (usec): min=3870, max=44690, avg=20794.95, stdev=8452.29 00:12:19.715 clat percentiles (usec): 00:12:19.715 | 1.00th=[ 6259], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[11731], 00:12:19.715 | 30.00th=[16581], 40.00th=[19006], 50.00th=[20841], 60.00th=[21627], 00:12:19.715 | 70.00th=[22676], 80.00th=[27919], 90.00th=[32900], 95.00th=[35914], 00:12:19.715 | 99.00th=[41681], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:12:19.715 | 99.99th=[44827] 00:12:19.715 bw ( KiB/s): min=14280, max=14392, per=24.15%, avg=14336.00, stdev=79.20, samples=2 00:12:19.715 iops : min= 3570, max= 3598, avg=3584.00, stdev=19.80, samples=2 00:12:19.715 lat (usec) : 1000=0.01% 00:12:19.715 lat (msec) : 4=0.28%, 10=7.34%, 20=57.28%, 50=35.09% 00:12:19.715 cpu : usr=3.60%, sys=5.09%, ctx=421, majf=0, minf=1 00:12:19.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:19.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.715 issued rwts: total=3473,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.715 job1: (groupid=0, jobs=1): err= 0: pid=3102775: Mon Nov 25 13:11:17 2024 00:12:19.715 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1003msec) 00:12:19.715 slat (usec): min=2, max=6690, avg=116.55, stdev=649.63 00:12:19.715 clat (usec): min=393, max=32693, avg=13749.30, stdev=3535.77 00:12:19.715 lat (usec): min=3068, max=32728, avg=13865.85, stdev=3606.73 00:12:19.715 clat percentiles (usec): 00:12:19.715 | 1.00th=[ 5211], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11338], 00:12:19.716 | 30.00th=[12256], 
40.00th=[12780], 50.00th=[13042], 60.00th=[13829], 00:12:19.716 | 70.00th=[14353], 80.00th=[15008], 90.00th=[18220], 95.00th=[20317], 00:12:19.716 | 99.00th=[26608], 99.50th=[28181], 99.90th=[32637], 99.95th=[32637], 00:12:19.716 | 99.99th=[32637] 00:12:19.716 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:12:19.716 slat (usec): min=4, max=7937, avg=129.38, stdev=564.83 00:12:19.716 clat (usec): min=5890, max=41927, avg=18726.32, stdev=8493.49 00:12:19.716 lat (usec): min=5899, max=41951, avg=18855.70, stdev=8550.03 00:12:19.716 clat percentiles (usec): 00:12:19.716 | 1.00th=[ 7701], 5.00th=[10683], 10.00th=[10814], 20.00th=[11469], 00:12:19.716 | 30.00th=[11994], 40.00th=[14091], 50.00th=[16450], 60.00th=[18744], 00:12:19.716 | 70.00th=[21365], 80.00th=[25560], 90.00th=[32900], 95.00th=[37487], 00:12:19.716 | 99.00th=[40633], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:12:19.716 | 99.99th=[41681] 00:12:19.716 bw ( KiB/s): min=15344, max=17080, per=27.31%, avg=16212.00, stdev=1227.54, samples=2 00:12:19.716 iops : min= 3836, max= 4270, avg=4053.00, stdev=306.88, samples=2 00:12:19.716 lat (usec) : 500=0.01% 00:12:19.716 lat (msec) : 4=0.41%, 10=4.55%, 20=74.60%, 50=20.42% 00:12:19.716 cpu : usr=5.89%, sys=8.68%, ctx=450, majf=0, minf=1 00:12:19.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:19.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.716 issued rwts: total=3669,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.716 job2: (groupid=0, jobs=1): err= 0: pid=3102776: Mon Nov 25 13:11:17 2024 00:12:19.716 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1020msec) 00:12:19.716 slat (usec): min=2, max=13773, avg=122.30, stdev=725.65 00:12:19.716 clat (usec): min=6788, max=35227, avg=15678.51, 
stdev=3912.40 00:12:19.716 lat (usec): min=6798, max=35244, avg=15800.81, stdev=3950.03 00:12:19.716 clat percentiles (usec): 00:12:19.716 | 1.00th=[ 8717], 5.00th=[11207], 10.00th=[11863], 20.00th=[13304], 00:12:19.716 | 30.00th=[13566], 40.00th=[13698], 50.00th=[14091], 60.00th=[15270], 00:12:19.716 | 70.00th=[16909], 80.00th=[19006], 90.00th=[20055], 95.00th=[21890], 00:12:19.716 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31065], 99.95th=[32900], 00:12:19.716 | 99.99th=[35390] 00:12:19.716 write: IOPS=3444, BW=13.5MiB/s (14.1MB/s)(13.7MiB/1020msec); 0 zone resets 00:12:19.716 slat (usec): min=3, max=22629, avg=162.75, stdev=1117.56 00:12:19.716 clat (usec): min=426, max=73378, avg=22785.13, stdev=12618.50 00:12:19.716 lat (usec): min=1040, max=73424, avg=22947.88, stdev=12707.96 00:12:19.716 clat percentiles (usec): 00:12:19.716 | 1.00th=[ 2868], 5.00th=[10421], 10.00th=[11731], 20.00th=[13173], 00:12:19.716 | 30.00th=[13960], 40.00th=[15795], 50.00th=[19792], 60.00th=[22414], 00:12:19.716 | 70.00th=[26084], 80.00th=[31327], 90.00th=[38536], 95.00th=[47973], 00:12:19.716 | 99.00th=[61604], 99.50th=[62129], 99.90th=[62129], 99.95th=[70779], 00:12:19.716 | 99.99th=[72877] 00:12:19.716 bw ( KiB/s): min=13200, max=13888, per=22.82%, avg=13544.00, stdev=486.49, samples=2 00:12:19.716 iops : min= 3300, max= 3472, avg=3386.00, stdev=121.62, samples=2 00:12:19.716 lat (usec) : 500=0.02% 00:12:19.716 lat (msec) : 2=0.35%, 4=0.97%, 10=1.97%, 20=65.19%, 50=28.99% 00:12:19.716 lat (msec) : 100=2.51% 00:12:19.716 cpu : usr=3.93%, sys=5.50%, ctx=340, majf=0, minf=1 00:12:19.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:19.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.716 issued rwts: total=3072,3513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.716 latency : target=0, window=0, percentile=100.00%, depth=128 
00:12:19.716 job3: (groupid=0, jobs=1): err= 0: pid=3102777: Mon Nov 25 13:11:17 2024 00:12:19.716 read: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1020msec) 00:12:19.716 slat (usec): min=2, max=13451, avg=132.20, stdev=922.30 00:12:19.716 clat (usec): min=4624, max=36428, avg=16725.12, stdev=5389.86 00:12:19.716 lat (usec): min=4643, max=36435, avg=16857.32, stdev=5457.12 00:12:19.716 clat percentiles (usec): 00:12:19.716 | 1.00th=[ 8029], 5.00th=[10552], 10.00th=[11600], 20.00th=[12387], 00:12:19.716 | 30.00th=[13042], 40.00th=[13960], 50.00th=[15008], 60.00th=[17433], 00:12:19.716 | 70.00th=[19268], 80.00th=[21103], 90.00th=[23987], 95.00th=[25297], 00:12:19.716 | 99.00th=[33424], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:12:19.716 | 99.99th=[36439] 00:12:19.716 write: IOPS=3864, BW=15.1MiB/s (15.8MB/s)(15.4MiB/1020msec); 0 zone resets 00:12:19.716 slat (usec): min=3, max=10568, avg=122.75, stdev=617.38 00:12:19.716 clat (usec): min=3210, max=46104, avg=17624.25, stdev=8609.04 00:12:19.716 lat (usec): min=3230, max=46125, avg=17747.00, stdev=8669.47 00:12:19.716 clat percentiles (usec): 00:12:19.716 | 1.00th=[ 5211], 5.00th=[ 8029], 10.00th=[10421], 20.00th=[12387], 00:12:19.716 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13566], 00:12:19.716 | 70.00th=[22152], 80.00th=[25297], 90.00th=[31327], 95.00th=[35390], 00:12:19.716 | 99.00th=[42206], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:12:19.716 | 99.99th=[45876] 00:12:19.716 bw ( KiB/s): min=14136, max=16384, per=25.71%, avg=15260.00, stdev=1589.58, samples=2 00:12:19.716 iops : min= 3534, max= 4096, avg=3815.00, stdev=397.39, samples=2 00:12:19.716 lat (msec) : 4=0.16%, 10=6.70%, 20=64.12%, 50=29.02% 00:12:19.716 cpu : usr=5.40%, sys=8.44%, ctx=418, majf=0, minf=1 00:12:19.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:19.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:19.716 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:19.716 issued rwts: total=3584,3942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:19.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:19.716 00:12:19.716 Run status group 0 (all jobs): 00:12:19.716 READ: bw=52.8MiB/s (55.4MB/s), 11.8MiB/s-14.3MiB/s (12.3MB/s-15.0MB/s), io=53.9MiB (56.5MB), run=1002-1020msec 00:12:19.716 WRITE: bw=58.0MiB/s (60.8MB/s), 13.5MiB/s-16.0MiB/s (14.1MB/s-16.7MB/s), io=59.1MiB (62.0MB), run=1002-1020msec 00:12:19.716 00:12:19.716 Disk stats (read/write): 00:12:19.716 nvme0n1: ios=2796/3072, merge=0/0, ticks=20524/31703, in_queue=52227, util=90.48% 00:12:19.716 nvme0n2: ios=3122/3479, merge=0/0, ticks=20890/30803, in_queue=51693, util=91.17% 00:12:19.716 nvme0n3: ios=2599/2882, merge=0/0, ticks=13526/24550, in_queue=38076, util=99.27% 00:12:19.716 nvme0n4: ios=3129/3554, merge=0/0, ticks=47938/56447, in_queue=104385, util=95.60% 00:12:19.716 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:19.716 [global] 00:12:19.716 thread=1 00:12:19.716 invalidate=1 00:12:19.716 rw=randwrite 00:12:19.716 time_based=1 00:12:19.716 runtime=1 00:12:19.716 ioengine=libaio 00:12:19.716 direct=1 00:12:19.716 bs=4096 00:12:19.716 iodepth=128 00:12:19.716 norandommap=0 00:12:19.716 numjobs=1 00:12:19.716 00:12:19.716 verify_dump=1 00:12:19.716 verify_backlog=512 00:12:19.716 verify_state_save=0 00:12:19.716 do_verify=1 00:12:19.716 verify=crc32c-intel 00:12:19.716 [job0] 00:12:19.716 filename=/dev/nvme0n1 00:12:19.716 [job1] 00:12:19.716 filename=/dev/nvme0n2 00:12:19.716 [job2] 00:12:19.716 filename=/dev/nvme0n3 00:12:19.716 [job3] 00:12:19.716 filename=/dev/nvme0n4 00:12:19.716 Could not set queue depth (nvme0n1) 00:12:19.716 Could not set queue depth (nvme0n2) 00:12:19.716 Could not set queue depth (nvme0n3) 00:12:19.716 Could not set queue 
depth (nvme0n4) 00:12:19.974 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:19.974 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:19.974 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:19.974 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:19.974 fio-3.35 00:12:19.974 Starting 4 threads 00:12:21.349 00:12:21.349 job0: (groupid=0, jobs=1): err= 0: pid=3103126: Mon Nov 25 13:11:18 2024 00:12:21.349 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:12:21.349 slat (usec): min=3, max=37795, avg=176.85, stdev=1091.98 00:12:21.349 clat (usec): min=5326, max=72615, avg=22184.17, stdev=11121.23 00:12:21.349 lat (usec): min=5345, max=72630, avg=22361.02, stdev=11225.26 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10683], 00:12:21.349 | 30.00th=[12911], 40.00th=[19792], 50.00th=[21627], 60.00th=[23725], 00:12:21.349 | 70.00th=[27395], 80.00th=[31327], 90.00th=[33162], 95.00th=[35914], 00:12:21.349 | 99.00th=[66323], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:12:21.349 | 99.99th=[72877] 00:12:21.349 write: IOPS=3377, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1005msec); 0 zone resets 00:12:21.349 slat (usec): min=4, max=9864, avg=123.14, stdev=655.40 00:12:21.349 clat (usec): min=3524, max=66859, avg=17409.42, stdev=9330.06 00:12:21.349 lat (usec): min=5057, max=66865, avg=17532.55, stdev=9377.94 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 5276], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10683], 00:12:21.349 | 30.00th=[11207], 40.00th=[12256], 50.00th=[14091], 60.00th=[16712], 00:12:21.349 | 70.00th=[18220], 80.00th=[25822], 90.00th=[28705], 95.00th=[34341], 00:12:21.349 | 99.00th=[61604], 99.50th=[62653], 
99.90th=[66323], 99.95th=[66847], 00:12:21.349 | 99.99th=[66847] 00:12:21.349 bw ( KiB/s): min= 9752, max=16384, per=21.02%, avg=13068.00, stdev=4689.53, samples=2 00:12:21.349 iops : min= 2438, max= 4096, avg=3267.00, stdev=1172.38, samples=2 00:12:21.349 lat (msec) : 4=0.02%, 10=6.70%, 20=51.22%, 50=40.10%, 100=1.96% 00:12:21.349 cpu : usr=4.88%, sys=7.27%, ctx=310, majf=0, minf=1 00:12:21.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:21.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.349 issued rwts: total=3072,3394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.349 job1: (groupid=0, jobs=1): err= 0: pid=3103127: Mon Nov 25 13:11:18 2024 00:12:21.349 read: IOPS=4798, BW=18.7MiB/s (19.7MB/s)(18.8MiB/1002msec) 00:12:21.349 slat (usec): min=2, max=16675, avg=95.37, stdev=640.25 00:12:21.349 clat (usec): min=1027, max=30248, avg=11868.46, stdev=2606.32 00:12:21.349 lat (usec): min=2997, max=30311, avg=11963.82, stdev=2665.31 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 5014], 5.00th=[ 7898], 10.00th=[ 9241], 20.00th=[10814], 00:12:21.349 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11338], 60.00th=[11600], 00:12:21.349 | 70.00th=[12125], 80.00th=[13566], 90.00th=[14746], 95.00th=[15795], 00:12:21.349 | 99.00th=[22938], 99.50th=[24249], 99.90th=[25560], 99.95th=[25560], 00:12:21.349 | 99.99th=[30278] 00:12:21.349 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:12:21.349 slat (usec): min=3, max=14361, avg=92.99, stdev=619.58 00:12:21.349 clat (usec): min=301, max=45115, avg=13574.51, stdev=6275.35 00:12:21.349 lat (usec): min=726, max=45131, avg=13667.51, stdev=6324.09 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 4817], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10683], 
00:12:21.349 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:12:21.349 | 70.00th=[11994], 80.00th=[15401], 90.00th=[20317], 95.00th=[31327], 00:12:21.349 | 99.00th=[34866], 99.50th=[36439], 99.90th=[41681], 99.95th=[41681], 00:12:21.349 | 99.99th=[45351] 00:12:21.349 bw ( KiB/s): min=18368, max=22592, per=32.94%, avg=20480.00, stdev=2986.82, samples=2 00:12:21.349 iops : min= 4592, max= 5648, avg=5120.00, stdev=746.70, samples=2 00:12:21.349 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.01% 00:12:21.349 lat (msec) : 2=0.28%, 4=0.38%, 10=11.80%, 20=81.72%, 50=5.73% 00:12:21.349 cpu : usr=5.19%, sys=7.89%, ctx=387, majf=0, minf=1 00:12:21.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:21.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.349 issued rwts: total=4808,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.349 job2: (groupid=0, jobs=1): err= 0: pid=3103128: Mon Nov 25 13:11:18 2024 00:12:21.349 read: IOPS=4426, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1009msec) 00:12:21.349 slat (usec): min=3, max=13943, avg=112.07, stdev=807.45 00:12:21.349 clat (usec): min=3136, max=38154, avg=14186.73, stdev=4324.67 00:12:21.349 lat (usec): min=4506, max=38193, avg=14298.79, stdev=4378.66 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 7963], 5.00th=[10028], 10.00th=[10290], 20.00th=[11076], 00:12:21.349 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12518], 60.00th=[13829], 00:12:21.349 | 70.00th=[15139], 80.00th=[17171], 90.00th=[20055], 95.00th=[22676], 00:12:21.349 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[31327], 00:12:21.349 | 99.99th=[38011] 00:12:21.349 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:12:21.349 slat (usec): min=3, max=11423, avg=99.73, stdev=584.56 
00:12:21.349 clat (usec): min=3369, max=90539, avg=14012.23, stdev=11163.73 00:12:21.349 lat (usec): min=3376, max=90556, avg=14111.97, stdev=11226.18 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 4228], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[11338], 00:12:21.349 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:12:21.349 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13829], 95.00th=[15008], 00:12:21.349 | 99.00th=[81265], 99.50th=[88605], 99.90th=[90702], 99.95th=[90702], 00:12:21.349 | 99.99th=[90702] 00:12:21.349 bw ( KiB/s): min=16944, max=19920, per=29.65%, avg=18432.00, stdev=2104.35, samples=2 00:12:21.349 iops : min= 4236, max= 4980, avg=4608.00, stdev=526.09, samples=2 00:12:21.349 lat (msec) : 4=0.32%, 10=9.54%, 20=82.74%, 50=6.00%, 100=1.40% 00:12:21.349 cpu : usr=5.85%, sys=8.53%, ctx=524, majf=0, minf=2 00:12:21.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:21.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.349 issued rwts: total=4466,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.349 job3: (groupid=0, jobs=1): err= 0: pid=3103129: Mon Nov 25 13:11:18 2024 00:12:21.349 read: IOPS=2368, BW=9473KiB/s (9700kB/s)(9520KiB/1005msec) 00:12:21.349 slat (usec): min=2, max=10409, avg=215.33, stdev=1078.89 00:12:21.349 clat (usec): min=1998, max=47255, avg=27337.73, stdev=7595.34 00:12:21.349 lat (usec): min=7958, max=47259, avg=27553.06, stdev=7589.17 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 8160], 5.00th=[18220], 10.00th=[18482], 20.00th=[19530], 00:12:21.349 | 30.00th=[23462], 40.00th=[25822], 50.00th=[26870], 60.00th=[28181], 00:12:21.349 | 70.00th=[29754], 80.00th=[33162], 90.00th=[38011], 95.00th=[43779], 00:12:21.349 | 99.00th=[46924], 99.50th=[47449], 
99.90th=[47449], 99.95th=[47449], 00:12:21.349 | 99.99th=[47449] 00:12:21.349 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:12:21.349 slat (usec): min=3, max=25578, avg=181.44, stdev=1150.91 00:12:21.349 clat (usec): min=5109, max=74874, avg=24089.02, stdev=10610.49 00:12:21.349 lat (usec): min=5116, max=74887, avg=24270.46, stdev=10647.78 00:12:21.349 clat percentiles (usec): 00:12:21.349 | 1.00th=[ 5669], 5.00th=[10683], 10.00th=[13304], 20.00th=[19530], 00:12:21.349 | 30.00th=[20579], 40.00th=[21627], 50.00th=[22152], 60.00th=[22676], 00:12:21.349 | 70.00th=[24773], 80.00th=[26870], 90.00th=[36963], 95.00th=[49546], 00:12:21.350 | 99.00th=[61080], 99.50th=[63177], 99.90th=[64226], 99.95th=[65274], 00:12:21.350 | 99.99th=[74974] 00:12:21.350 bw ( KiB/s): min= 8952, max=11528, per=16.47%, avg=10240.00, stdev=1821.51, samples=2 00:12:21.350 iops : min= 2238, max= 2882, avg=2560.00, stdev=455.38, samples=2 00:12:21.350 lat (msec) : 2=0.02%, 10=2.49%, 20=19.60%, 50=75.69%, 100=2.21% 00:12:21.350 cpu : usr=2.89%, sys=4.98%, ctx=182, majf=0, minf=1 00:12:21.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:12:21.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:21.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:21.350 issued rwts: total=2380,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:21.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:21.350 00:12:21.350 Run status group 0 (all jobs): 00:12:21.350 READ: bw=57.0MiB/s (59.8MB/s), 9473KiB/s-18.7MiB/s (9700kB/s-19.7MB/s), io=57.5MiB (60.3MB), run=1002-1009msec 00:12:21.350 WRITE: bw=60.7MiB/s (63.7MB/s), 9.95MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=61.3MiB (64.2MB), run=1002-1009msec 00:12:21.350 00:12:21.350 Disk stats (read/write): 00:12:21.350 nvme0n1: ios=2764/3072, merge=0/0, ticks=27034/25669, in_queue=52703, util=97.80% 00:12:21.350 nvme0n2: ios=4134/4123, merge=0/0, 
ticks=29765/39649, in_queue=69414, util=97.26% 00:12:21.350 nvme0n3: ios=3584/3927, merge=0/0, ticks=48702/54785, in_queue=103487, util=89.06% 00:12:21.350 nvme0n4: ios=2094/2091, merge=0/0, ticks=15471/16066, in_queue=31537, util=99.48% 00:12:21.350 13:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:21.350 13:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3103267 00:12:21.350 13:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:21.350 13:11:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:21.350 [global] 00:12:21.350 thread=1 00:12:21.350 invalidate=1 00:12:21.350 rw=read 00:12:21.350 time_based=1 00:12:21.350 runtime=10 00:12:21.350 ioengine=libaio 00:12:21.350 direct=1 00:12:21.350 bs=4096 00:12:21.350 iodepth=1 00:12:21.350 norandommap=1 00:12:21.350 numjobs=1 00:12:21.350 00:12:21.350 [job0] 00:12:21.350 filename=/dev/nvme0n1 00:12:21.350 [job1] 00:12:21.350 filename=/dev/nvme0n2 00:12:21.350 [job2] 00:12:21.350 filename=/dev/nvme0n3 00:12:21.350 [job3] 00:12:21.350 filename=/dev/nvme0n4 00:12:21.350 Could not set queue depth (nvme0n1) 00:12:21.350 Could not set queue depth (nvme0n2) 00:12:21.350 Could not set queue depth (nvme0n3) 00:12:21.350 Could not set queue depth (nvme0n4) 00:12:21.350 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.350 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.350 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.350 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:21.350 fio-3.35 00:12:21.350 Starting 4 threads 00:12:24.633 13:11:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:24.633 13:11:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:24.633 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34549760, buflen=4096 00:12:24.633 fio: pid=3103358, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:24.633 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:24.633 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:24.633 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=966656, buflen=4096 00:12:24.633 fio: pid=3103357, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:25.199 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46882816, buflen=4096 00:12:25.199 fio: pid=3103355, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:25.199 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:25.199 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:25.199 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:25.199 13:11:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:12:25.199 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=55443456, buflen=4096 00:12:25.199 fio: pid=3103356, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:25.457 00:12:25.457 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3103355: Mon Nov 25 13:11:22 2024 00:12:25.457 read: IOPS=3195, BW=12.5MiB/s (13.1MB/s)(44.7MiB/3582msec) 00:12:25.457 slat (usec): min=4, max=35406, avg=20.68, stdev=438.00 00:12:25.457 clat (usec): min=180, max=40767, avg=286.82, stdev=554.77 00:12:25.457 lat (usec): min=186, max=40775, avg=307.51, stdev=707.15 00:12:25.457 clat percentiles (usec): 00:12:25.457 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 237], 00:12:25.457 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 273], 00:12:25.457 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 355], 95.00th=[ 400], 00:12:25.457 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 2343], 00:12:25.457 | 99.99th=[40633] 00:12:25.457 bw ( KiB/s): min=11208, max=14832, per=37.22%, avg=12949.33, stdev=1345.75, samples=6 00:12:25.457 iops : min= 2802, max= 3708, avg=3237.33, stdev=336.44, samples=6 00:12:25.457 lat (usec) : 250=39.85%, 500=59.55%, 750=0.52% 00:12:25.457 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01%, 20=0.01%, 50=0.02% 00:12:25.457 cpu : usr=2.18%, sys=4.97%, ctx=11451, majf=0, minf=1 00:12:25.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 issued rwts: total=11447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.457 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3103356: Mon Nov 25 13:11:22 2024 00:12:25.457 read: 
IOPS=3498, BW=13.7MiB/s (14.3MB/s)(52.9MiB/3869msec) 00:12:25.457 slat (usec): min=4, max=15715, avg=14.56, stdev=246.31 00:12:25.457 clat (usec): min=154, max=41092, avg=266.99, stdev=1104.04 00:12:25.457 lat (usec): min=158, max=41105, avg=281.56, stdev=1131.52 00:12:25.457 clat percentiles (usec): 00:12:25.457 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 208], 00:12:25.457 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:12:25.457 | 70.00th=[ 247], 80.00th=[ 262], 90.00th=[ 281], 95.00th=[ 297], 00:12:25.457 | 99.00th=[ 379], 99.50th=[ 486], 99.90th=[ 1012], 99.95th=[40633], 00:12:25.457 | 99.99th=[41157] 00:12:25.457 bw ( KiB/s): min=11896, max=17200, per=43.28%, avg=15057.14, stdev=1767.63, samples=7 00:12:25.457 iops : min= 2974, max= 4300, avg=3764.29, stdev=441.91, samples=7 00:12:25.457 lat (usec) : 250=72.74%, 500=26.82%, 750=0.29%, 1000=0.04% 00:12:25.457 lat (msec) : 2=0.03%, 50=0.07% 00:12:25.457 cpu : usr=1.63%, sys=4.16%, ctx=13541, majf=0, minf=2 00:12:25.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 issued rwts: total=13537,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.457 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3103357: Mon Nov 25 13:11:22 2024 00:12:25.457 read: IOPS=71, BW=287KiB/s (294kB/s)(944KiB/3293msec) 00:12:25.457 slat (usec): min=6, max=14928, avg=77.88, stdev=968.78 00:12:25.457 clat (usec): min=200, max=41218, avg=13767.62, stdev=19157.40 00:12:25.457 lat (usec): min=210, max=55980, avg=13845.77, stdev=19275.57 00:12:25.457 clat percentiles (usec): 00:12:25.457 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 245], 00:12:25.457 | 30.00th=[ 258], 
40.00th=[ 330], 50.00th=[ 445], 60.00th=[ 515], 00:12:25.457 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:25.457 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:25.457 | 99.99th=[41157] 00:12:25.457 bw ( KiB/s): min= 96, max= 1216, per=0.88%, avg=305.33, stdev=448.18, samples=6 00:12:25.457 iops : min= 24, max= 304, avg=76.33, stdev=112.05, samples=6 00:12:25.457 lat (usec) : 250=26.16%, 500=29.96%, 750=10.55% 00:12:25.457 lat (msec) : 50=32.91% 00:12:25.457 cpu : usr=0.09%, sys=0.09%, ctx=238, majf=0, minf=2 00:12:25.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 issued rwts: total=237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.457 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3103358: Mon Nov 25 13:11:22 2024 00:12:25.457 read: IOPS=2855, BW=11.2MiB/s (11.7MB/s)(32.9MiB/2954msec) 00:12:25.457 slat (nsec): min=5562, max=54276, avg=12663.43, stdev=5502.48 00:12:25.457 clat (usec): min=192, max=41375, avg=331.60, stdev=1527.12 00:12:25.457 lat (usec): min=201, max=41399, avg=344.26, stdev=1527.52 00:12:25.457 clat percentiles (usec): 00:12:25.457 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:12:25.457 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 277], 60.00th=[ 293], 00:12:25.457 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 334], 00:12:25.457 | 99.00th=[ 392], 99.50th=[ 494], 99.90th=[40633], 99.95th=[40633], 00:12:25.457 | 99.99th=[41157] 00:12:25.457 bw ( KiB/s): min= 6344, max=14912, per=33.98%, avg=11824.00, stdev=3268.47, samples=5 00:12:25.457 iops : min= 1586, max= 3728, avg=2956.00, stdev=817.12, samples=5 00:12:25.457 lat 
(usec) : 250=40.10%, 500=59.44%, 750=0.31% 00:12:25.457 lat (msec) : 50=0.14% 00:12:25.457 cpu : usr=1.73%, sys=4.84%, ctx=8438, majf=0, minf=2 00:12:25.457 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.457 issued rwts: total=8436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.457 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.457 00:12:25.457 Run status group 0 (all jobs): 00:12:25.457 READ: bw=34.0MiB/s (35.6MB/s), 287KiB/s-13.7MiB/s (294kB/s-14.3MB/s), io=131MiB (138MB), run=2954-3869msec 00:12:25.457 00:12:25.457 Disk stats (read/write): 00:12:25.457 nvme0n1: ios=10691/0, merge=0/0, ticks=2985/0, in_queue=2985, util=93.68% 00:12:25.457 nvme0n2: ios=13537/0, merge=0/0, ticks=3529/0, in_queue=3529, util=95.43% 00:12:25.457 nvme0n3: ios=232/0, merge=0/0, ticks=3087/0, in_queue=3087, util=96.36% 00:12:25.457 nvme0n4: ios=8478/0, merge=0/0, ticks=3638/0, in_queue=3638, util=100.00% 00:12:25.715 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:25.715 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:25.974 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:25.974 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:26.232 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:26.232 13:11:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:26.491 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:26.491 13:11:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3103267 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.749 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.023 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.023 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.023 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:27.023 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:12:27.023 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:27.023 nvmf hotplug test: fio failed as expected 00:12:27.023 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.282 rmmod nvme_tcp 00:12:27.282 rmmod nvme_fabrics 00:12:27.282 rmmod nvme_keyring 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3101226 ']' 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3101226 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3101226 ']' 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3101226 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3101226 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3101226' 00:12:27.282 killing process with pid 3101226 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3101226 00:12:27.282 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3101226 00:12:27.540 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:27.540 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.540 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.540 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:27.540 13:11:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:27.540 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.540 13:11:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.540 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.540 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.540 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.540 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.540 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.444 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.444 00:12:29.444 real 0m24.354s 00:12:29.444 user 1m24.122s 00:12:29.444 sys 0m7.977s 00:12:29.444 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.444 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.444 ************************************ 00:12:29.444 END TEST nvmf_fio_target 00:12:29.444 ************************************ 00:12:29.444 13:11:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:29.444 13:11:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.444 13:11:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.444 13:11:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:12:29.703 ************************************ 00:12:29.703 START TEST nvmf_bdevio 00:12:29.703 ************************************ 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:29.703 * Looking for test storage... 00:12:29.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.703 13:11:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:29.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.703 --rc genhtml_branch_coverage=1 00:12:29.703 --rc genhtml_function_coverage=1 00:12:29.703 --rc genhtml_legend=1 00:12:29.703 --rc geninfo_all_blocks=1 00:12:29.703 --rc geninfo_unexecuted_blocks=1 00:12:29.703 00:12:29.703 ' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:29.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.703 --rc genhtml_branch_coverage=1 00:12:29.703 --rc genhtml_function_coverage=1 00:12:29.703 --rc genhtml_legend=1 00:12:29.703 --rc geninfo_all_blocks=1 00:12:29.703 --rc geninfo_unexecuted_blocks=1 00:12:29.703 00:12:29.703 ' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:29.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.703 --rc genhtml_branch_coverage=1 00:12:29.703 --rc genhtml_function_coverage=1 00:12:29.703 --rc genhtml_legend=1 00:12:29.703 --rc geninfo_all_blocks=1 00:12:29.703 --rc geninfo_unexecuted_blocks=1 00:12:29.703 00:12:29.703 ' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:29.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.703 --rc genhtml_branch_coverage=1 00:12:29.703 --rc genhtml_function_coverage=1 00:12:29.703 --rc genhtml_legend=1 00:12:29.703 --rc geninfo_all_blocks=1 00:12:29.703 --rc geninfo_unexecuted_blocks=1 00:12:29.703 00:12:29.703 ' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.703 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.704 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.324 13:11:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.324 13:11:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:32.324 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:32.324 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.324 
13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:32.324 Found net devices under 0000:09:00.0: cvl_0_0 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:32.324 Found net devices under 0000:09:00.1: cvl_0_1 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.324 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:12:32.325 00:12:32.325 --- 10.0.0.2 ping statistics --- 00:12:32.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.325 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:12:32.325 00:12:32.325 --- 10.0.0.1 ping statistics --- 00:12:32.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.325 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.325 13:11:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3106004 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3106004 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3106004 ']' 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.325 [2024-11-25 13:11:29.648848] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:12:32.325 [2024-11-25 13:11:29.648930] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.325 [2024-11-25 13:11:29.722387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.325 [2024-11-25 13:11:29.784619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.325 [2024-11-25 13:11:29.784667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.325 [2024-11-25 13:11:29.784695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.325 [2024-11-25 13:11:29.784706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.325 [2024-11-25 13:11:29.784715] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:32.325 [2024-11-25 13:11:29.786235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:32.325 [2024-11-25 13:11:29.786301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:32.325 [2024-11-25 13:11:29.786367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:32.325 [2024-11-25 13:11:29.786370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.325 [2024-11-25 13:11:29.929599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.325 13:11:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.325 Malloc0 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.325 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:32.583 [2024-11-25 13:11:29.993517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:32.583 { 00:12:32.583 "params": { 00:12:32.583 "name": "Nvme$subsystem", 00:12:32.583 "trtype": "$TEST_TRANSPORT", 00:12:32.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.583 "adrfam": "ipv4", 00:12:32.583 "trsvcid": "$NVMF_PORT", 00:12:32.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.583 "hdgst": ${hdgst:-false}, 00:12:32.583 "ddgst": ${ddgst:-false} 00:12:32.583 }, 00:12:32.583 "method": "bdev_nvme_attach_controller" 00:12:32.583 } 00:12:32.583 EOF 00:12:32.583 )") 00:12:32.583 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:32.583 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:32.583 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:32.583 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:32.583 "params": { 00:12:32.583 "name": "Nvme1", 00:12:32.583 "trtype": "tcp", 00:12:32.583 "traddr": "10.0.0.2", 00:12:32.583 "adrfam": "ipv4", 00:12:32.583 "trsvcid": "4420", 00:12:32.583 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:32.583 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.583 "hdgst": false, 00:12:32.583 "ddgst": false 00:12:32.583 }, 00:12:32.583 "method": "bdev_nvme_attach_controller" 00:12:32.583 }' 00:12:32.583 [2024-11-25 13:11:30.039893] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:12:32.583 [2024-11-25 13:11:30.039977] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106148 ] 00:12:32.583 [2024-11-25 13:11:30.110424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:32.583 [2024-11-25 13:11:30.176460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.583 [2024-11-25 13:11:30.176511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.583 [2024-11-25 13:11:30.176515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.840 I/O targets: 00:12:32.840 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:32.840 00:12:32.840 00:12:32.840 CUnit - A unit testing framework for C - Version 2.1-3 00:12:32.840 http://cunit.sourceforge.net/ 00:12:32.840 00:12:32.840 00:12:32.840 Suite: bdevio tests on: Nvme1n1 00:12:33.098 Test: blockdev write read block ...passed 00:12:33.098 Test: blockdev write zeroes read block ...passed 00:12:33.098 Test: blockdev write zeroes read no split ...passed 00:12:33.098 Test: blockdev write zeroes read split 
...passed 00:12:33.098 Test: blockdev write zeroes read split partial ...passed 00:12:33.098 Test: blockdev reset ...[2024-11-25 13:11:30.641346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:33.098 [2024-11-25 13:11:30.641451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2189680 (9): Bad file descriptor 00:12:33.098 [2024-11-25 13:11:30.695627] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:33.098 passed 00:12:33.098 Test: blockdev write read 8 blocks ...passed 00:12:33.098 Test: blockdev write read size > 128k ...passed 00:12:33.098 Test: blockdev write read invalid size ...passed 00:12:33.098 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.098 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.098 Test: blockdev write read max offset ...passed 00:12:33.357 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.357 Test: blockdev writev readv 8 blocks ...passed 00:12:33.357 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.357 Test: blockdev writev readv block ...passed 00:12:33.357 Test: blockdev writev readv size > 128k ...passed 00:12:33.357 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.357 Test: blockdev comparev and writev ...[2024-11-25 13:11:30.909566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 13:11:30.909602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.909627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 
13:11:30.909643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.910015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 13:11:30.910040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.910061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 13:11:30.910076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.910436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 13:11:30.910461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.910481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 13:11:30.910498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.910855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 13:11:30.910880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.910902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:33.357 [2024-11-25 13:11:30.910918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:33.357 passed 00:12:33.357 Test: blockdev nvme passthru rw ...passed 00:12:33.357 Test: blockdev nvme passthru vendor specific ...[2024-11-25 13:11:30.993596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.357 [2024-11-25 13:11:30.993624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.993775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.357 [2024-11-25 13:11:30.993799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.993934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.357 [2024-11-25 13:11:30.993957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:33.357 [2024-11-25 13:11:30.994091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:33.357 [2024-11-25 13:11:30.994114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:33.357 passed 00:12:33.357 Test: blockdev nvme admin passthru ...passed 00:12:33.615 Test: blockdev copy ...passed 00:12:33.615 00:12:33.615 Run Summary: Type Total Ran Passed Failed Inactive 00:12:33.615 suites 1 1 n/a 0 0 00:12:33.615 tests 23 23 23 0 0 00:12:33.615 asserts 152 152 152 0 n/a 00:12:33.615 00:12:33.615 Elapsed time = 1.134 seconds 
00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.615 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.615 rmmod nvme_tcp 00:12:33.615 rmmod nvme_fabrics 00:12:33.873 rmmod nvme_keyring 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3106004 ']' 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3106004 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3106004 ']' 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3106004 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3106004 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3106004' 00:12:33.873 killing process with pid 3106004 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3106004 00:12:33.873 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3106004 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.132 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.037 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.037 00:12:36.037 real 0m6.549s 00:12:36.037 user 0m10.498s 00:12:36.037 sys 0m2.193s 00:12:36.037 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.037 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:36.037 ************************************ 00:12:36.037 END TEST nvmf_bdevio 00:12:36.037 ************************************ 00:12:36.037 13:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:36.037 00:12:36.037 real 3m56.258s 00:12:36.037 user 10m15.461s 00:12:36.037 sys 1m8.696s 00:12:36.037 13:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.037 13:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:36.037 ************************************ 00:12:36.037 END TEST nvmf_target_core 00:12:36.037 ************************************ 00:12:36.297 13:11:33 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:36.297 13:11:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.297 13:11:33 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.297 13:11:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:12:36.297 ************************************ 00:12:36.297 START TEST nvmf_target_extra 00:12:36.297 ************************************ 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:36.297 * Looking for test storage... 00:12:36.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.297 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.298 --rc genhtml_branch_coverage=1 00:12:36.298 --rc genhtml_function_coverage=1 00:12:36.298 --rc genhtml_legend=1 00:12:36.298 --rc geninfo_all_blocks=1 
00:12:36.298 --rc geninfo_unexecuted_blocks=1 00:12:36.298 00:12:36.298 ' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.298 --rc genhtml_branch_coverage=1 00:12:36.298 --rc genhtml_function_coverage=1 00:12:36.298 --rc genhtml_legend=1 00:12:36.298 --rc geninfo_all_blocks=1 00:12:36.298 --rc geninfo_unexecuted_blocks=1 00:12:36.298 00:12:36.298 ' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.298 --rc genhtml_branch_coverage=1 00:12:36.298 --rc genhtml_function_coverage=1 00:12:36.298 --rc genhtml_legend=1 00:12:36.298 --rc geninfo_all_blocks=1 00:12:36.298 --rc geninfo_unexecuted_blocks=1 00:12:36.298 00:12:36.298 ' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.298 --rc genhtml_branch_coverage=1 00:12:36.298 --rc genhtml_function_coverage=1 00:12:36.298 --rc genhtml_legend=1 00:12:36.298 --rc geninfo_all_blocks=1 00:12:36.298 --rc geninfo_unexecuted_blocks=1 00:12:36.298 00:12:36.298 ' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.298 ************************************ 00:12:36.298 START TEST nvmf_example 00:12:36.298 ************************************ 00:12:36.298 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:36.558 * Looking for test storage... 00:12:36.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.558 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.558 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.558 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.558 
13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.558 --rc genhtml_branch_coverage=1 00:12:36.558 --rc genhtml_function_coverage=1 00:12:36.558 --rc genhtml_legend=1 00:12:36.558 --rc geninfo_all_blocks=1 00:12:36.558 --rc geninfo_unexecuted_blocks=1 00:12:36.558 00:12:36.558 ' 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.558 --rc genhtml_branch_coverage=1 00:12:36.558 --rc genhtml_function_coverage=1 00:12:36.558 --rc genhtml_legend=1 00:12:36.558 --rc geninfo_all_blocks=1 00:12:36.558 --rc geninfo_unexecuted_blocks=1 00:12:36.558 00:12:36.558 ' 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.558 --rc genhtml_branch_coverage=1 00:12:36.558 --rc genhtml_function_coverage=1 00:12:36.558 --rc genhtml_legend=1 00:12:36.558 --rc geninfo_all_blocks=1 00:12:36.558 --rc geninfo_unexecuted_blocks=1 00:12:36.558 00:12:36.558 ' 00:12:36.558 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.558 --rc 
genhtml_branch_coverage=1 00:12:36.558 --rc genhtml_function_coverage=1 00:12:36.558 --rc genhtml_legend=1 00:12:36.558 --rc geninfo_all_blocks=1 00:12:36.558 --rc geninfo_unexecuted_blocks=1 00:12:36.558 00:12:36.558 ' 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:36.559 13:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.559 
13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.559 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:39.098 13:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:39.098 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:39.098 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:39.098 Found net devices under 0000:09:00.0: cvl_0_0 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:39.098 13:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:39.098 Found net devices under 0000:09:00.1: cvl_0_1 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.098 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.099 
13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:39.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:12:39.099 00:12:39.099 --- 10.0.0.2 ping statistics --- 00:12:39.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.099 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:12:39.099 00:12:39.099 --- 10.0.0.1 ping statistics --- 00:12:39.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.099 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:39.099 13:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3108292 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3108292 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3108292 ']' 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:39.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.099 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:39.358 
13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:39.358 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:49.322 Initializing NVMe Controllers 00:12:49.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:49.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:49.322 Initialization complete. Launching workers. 00:12:49.322 ======================================================== 00:12:49.322 Latency(us) 00:12:49.322 Device Information : IOPS MiB/s Average min max 00:12:49.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14749.26 57.61 4338.77 798.78 18047.60 00:12:49.322 ======================================================== 00:12:49.322 Total : 14749.26 57.61 4338.77 798.78 18047.60 00:12:49.322 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.322 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.322 rmmod nvme_tcp 00:12:49.580 rmmod nvme_fabrics 00:12:49.580 rmmod nvme_keyring 00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3108292 ']'
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3108292
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3108292 ']'
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3108292
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108292
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:12:49.580 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108292'
killing process with pid 3108292
13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3108292
13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3108292
00:12:49.838 nvmf threads initialize successfully
00:12:49.838 bdev subsystem init successfully
00:12:49.838 created a nvmf target service
00:12:49.838 create targets's poll groups done
00:12:49.838 all subsystems of target started
00:12:49.838 nvmf target is running
00:12:49.838 all subsystems of target stopped
00:12:49.838 destroy targets's poll groups done
00:12:49.838 destroyed the nvmf target service
00:12:49.838 bdev subsystem finish successfully
00:12:49.838 nvmf threads destroy successfully
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:49.838 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:51.742 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:51.742 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:12:51.742 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:51.742 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:52.002
00:12:52.002 real 0m15.502s
00:12:52.002 user 0m42.696s
00:12:52.002 sys 0m3.260s
00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:12:52.002 ************************************
00:12:52.002 END TEST nvmf_example
00:12:52.002 ************************************
00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:52.002 ************************************
00:12:52.002 START TEST nvmf_filesystem
00:12:52.002 ************************************
00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:12:52.002 * Looking for test storage...
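The `killprocess 3108292` trace above (autotest_common.sh@954-978) guards the pid, probes liveness with `kill -0`, reads the process name via `ps -o comm=` to decide whether `sudo` is needed, echoes, kills, then waits. A simplified sketch of that flow, assuming bash; this is not the authoritative SPDK helper, which differs in detail:

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess flow traced in the log above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # mirrors the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 0    # kill -0: is the process alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")   # what are we about to kill?
    echo "killing process with pid $pid"
    if [ "$name" = "sudo" ]; then
        sudo kill "$pid"                      # target was started via sudo
    else
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}
```

The `comm=` check matters because killing a `sudo` wrapper directly would leave the privileged child running; the trace above takes the plain `kill` branch since the process name is `nvmf`.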
00:12:52.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:52.002 
13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.002 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:52.003 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:52.003 --rc genhtml_branch_coverage=1 00:12:52.003 --rc genhtml_function_coverage=1 00:12:52.003 --rc genhtml_legend=1 00:12:52.003 --rc geninfo_all_blocks=1 00:12:52.003 --rc geninfo_unexecuted_blocks=1 00:12:52.003 00:12:52.003 ' 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.003 --rc genhtml_branch_coverage=1 00:12:52.003 --rc genhtml_function_coverage=1 00:12:52.003 --rc genhtml_legend=1 00:12:52.003 --rc geninfo_all_blocks=1 00:12:52.003 --rc geninfo_unexecuted_blocks=1 00:12:52.003 00:12:52.003 ' 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.003 --rc genhtml_branch_coverage=1 00:12:52.003 --rc genhtml_function_coverage=1 00:12:52.003 --rc genhtml_legend=1 00:12:52.003 --rc geninfo_all_blocks=1 00:12:52.003 --rc geninfo_unexecuted_blocks=1 00:12:52.003 00:12:52.003 ' 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:52.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.003 --rc genhtml_branch_coverage=1 00:12:52.003 --rc genhtml_function_coverage=1 00:12:52.003 --rc genhtml_legend=1 00:12:52.003 --rc geninfo_all_blocks=1 00:12:52.003 --rc geninfo_unexecuted_blocks=1 00:12:52.003 00:12:52.003 ' 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:52.003 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:52.003 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:52.003 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:52.003 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:52.003 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:52.004 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:52.004 
13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:52.004 #define SPDK_CONFIG_H 00:12:52.004 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:52.004 #define SPDK_CONFIG_APPS 1 00:12:52.004 #define SPDK_CONFIG_ARCH native 00:12:52.004 #undef SPDK_CONFIG_ASAN 00:12:52.004 #undef SPDK_CONFIG_AVAHI 00:12:52.004 #undef SPDK_CONFIG_CET 00:12:52.004 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:52.004 #define SPDK_CONFIG_COVERAGE 1 00:12:52.004 #define SPDK_CONFIG_CROSS_PREFIX 00:12:52.004 #undef SPDK_CONFIG_CRYPTO 00:12:52.004 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:52.004 #undef SPDK_CONFIG_CUSTOMOCF 00:12:52.004 #undef SPDK_CONFIG_DAOS 00:12:52.004 #define SPDK_CONFIG_DAOS_DIR 00:12:52.004 #define SPDK_CONFIG_DEBUG 1 00:12:52.004 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:52.004 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:52.004 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:52.004 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:52.004 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:52.004 #undef SPDK_CONFIG_DPDK_UADK 00:12:52.004 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:52.004 #define SPDK_CONFIG_EXAMPLES 1 00:12:52.004 #undef SPDK_CONFIG_FC 00:12:52.004 #define SPDK_CONFIG_FC_PATH 00:12:52.004 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:52.004 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:52.004 #define SPDK_CONFIG_FSDEV 1 00:12:52.004 #undef SPDK_CONFIG_FUSE 00:12:52.004 #undef SPDK_CONFIG_FUZZER 00:12:52.004 #define SPDK_CONFIG_FUZZER_LIB 00:12:52.004 #undef SPDK_CONFIG_GOLANG 00:12:52.004 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:52.004 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:52.004 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:52.004 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:52.004 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:52.004 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:52.004 #undef SPDK_CONFIG_HAVE_LZ4 00:12:52.004 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:52.004 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:52.004 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:52.004 #define SPDK_CONFIG_IDXD 1 00:12:52.004 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:52.004 #undef SPDK_CONFIG_IPSEC_MB 00:12:52.004 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:52.004 #define SPDK_CONFIG_ISAL 1 00:12:52.004 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:52.004 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:52.004 #define SPDK_CONFIG_LIBDIR 00:12:52.004 #undef SPDK_CONFIG_LTO 00:12:52.004 #define SPDK_CONFIG_MAX_LCORES 128 00:12:52.004 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:52.004 #define SPDK_CONFIG_NVME_CUSE 1 00:12:52.004 #undef SPDK_CONFIG_OCF 00:12:52.004 #define SPDK_CONFIG_OCF_PATH 00:12:52.004 #define SPDK_CONFIG_OPENSSL_PATH 00:12:52.004 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:52.004 #define SPDK_CONFIG_PGO_DIR 00:12:52.004 #undef SPDK_CONFIG_PGO_USE 00:12:52.004 #define SPDK_CONFIG_PREFIX /usr/local 00:12:52.004 #undef SPDK_CONFIG_RAID5F 00:12:52.004 #undef SPDK_CONFIG_RBD 00:12:52.004 #define SPDK_CONFIG_RDMA 1 00:12:52.004 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:52.004 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:52.004 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:52.004 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:52.004 #define SPDK_CONFIG_SHARED 1 00:12:52.004 #undef SPDK_CONFIG_SMA 00:12:52.004 #define SPDK_CONFIG_TESTS 1 00:12:52.004 #undef SPDK_CONFIG_TSAN 00:12:52.004 #define SPDK_CONFIG_UBLK 1 00:12:52.004 #define SPDK_CONFIG_UBSAN 1 00:12:52.004 #undef SPDK_CONFIG_UNIT_TESTS 00:12:52.004 #undef SPDK_CONFIG_URING 00:12:52.004 #define SPDK_CONFIG_URING_PATH 00:12:52.004 #undef SPDK_CONFIG_URING_ZNS 00:12:52.004 #undef SPDK_CONFIG_USDT 00:12:52.004 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:52.004 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:52.004 #define SPDK_CONFIG_VFIO_USER 1 00:12:52.004 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:52.004 #define SPDK_CONFIG_VHOST 1 00:12:52.004 #define SPDK_CONFIG_VIRTIO 1 00:12:52.004 #undef SPDK_CONFIG_VTUNE 00:12:52.004 #define SPDK_CONFIG_VTUNE_DIR 00:12:52.004 #define SPDK_CONFIG_WERROR 1 00:12:52.004 #define SPDK_CONFIG_WPDK_DIR 00:12:52.004 #undef SPDK_CONFIG_XNVME 00:12:52.004 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
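Earlier in this log, the `lt 1.15 2` trace through scripts/common.sh@333-368 compares dotted version strings component by component. A hedged bash sketch of that comparison, mirroring the traced control flow (split on `.`, `-` and `:`, pad missing components with 0); the real scripts/common.sh also normalizes non-numeric components, which is omitted here:

```shell
#!/usr/bin/env bash
# Component-wise version comparison, sketched after the cmp_versions trace.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v len
    IFS='.-:' read -ra ver1 <<< "$1"          # split first version
    IFS='.-:' read -ra ver2 <<< "$3"          # split second version
    len=${#ver1[@]}
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing parts count as 0
        if (( d1 > d2 )); then [[ $op == '>' ]] && return 0 || return 1; fi
        if (( d1 < d2 )); then [[ $op == '<' ]] && return 0 || return 1; fi
    done
    [[ $op == '=' ]] && return 0 || return 1   # every component matched
}
lt() { cmp_versions "$1" '<' "$2"; }           # as traced: lt 1.15 2 succeeds
```

Numeric comparison per component is what makes `lt 1.2.3 1.10` succeed, where a plain string compare would get it wrong.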
00:12:52.004 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:52.005 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:52.268 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:52.268 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:52.269 
13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:52.269 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:52.269 
13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:52.269 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
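The long runs of `-- # : 0` immediately followed by `-- # export SPDK_TEST_*` in the trace above are bash's default-then-export idiom: `:` is a no-op command, but its arguments still undergo expansion, so `: "${VAR:=0}"` assigns the default only when the variable is unset or empty, and the following `export` publishes the result to child processes. A minimal, self-contained sketch of the same pattern (the flag names here are illustrative, not taken from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Default-then-export idiom, as traced above. ':' does nothing,
# but ${VAR:=default} inside its argument assigns VAR a default
# when VAR is unset or empty.
: "${MY_TEST_FLAG:=0}"        # hypothetical flag; defaults to 0
export MY_TEST_FLAG

# A caller (e.g. a CI job) can pre-set the flag to override the default:
MY_OTHER_FLAG=1
: "${MY_OTHER_FLAG:=0}"       # no-op here, since the flag is already set
export MY_OTHER_FLAG

echo "MY_TEST_FLAG=$MY_TEST_FLAG MY_OTHER_FLAG=$MY_OTHER_FLAG"
```

This is why each flag produces two trace records: one for the `:` expansion (showing the resolved default) and one for the `export`.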
00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:52.269 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3109982 ]] 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3109982 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.OIxpz8 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OIxpz8/tests/target /tmp/spdk.OIxpz8 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:52.270 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50774929408 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11213590528 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375265280 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22441984 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=29919629312 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:12:52.271 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074630656 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:52.271 * Looking for test storage... 
00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=50774929408 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13428183040 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.271 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:52.271 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.271 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:52.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.271 --rc genhtml_branch_coverage=1 00:12:52.271 --rc genhtml_function_coverage=1 00:12:52.271 --rc genhtml_legend=1 00:12:52.271 --rc geninfo_all_blocks=1 00:12:52.271 --rc geninfo_unexecuted_blocks=1 00:12:52.271 00:12:52.271 ' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:52.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.272 --rc genhtml_branch_coverage=1 00:12:52.272 --rc genhtml_function_coverage=1 00:12:52.272 --rc genhtml_legend=1 00:12:52.272 --rc geninfo_all_blocks=1 00:12:52.272 --rc geninfo_unexecuted_blocks=1 00:12:52.272 00:12:52.272 ' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:52.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.272 --rc genhtml_branch_coverage=1 00:12:52.272 --rc genhtml_function_coverage=1 00:12:52.272 --rc genhtml_legend=1 00:12:52.272 --rc geninfo_all_blocks=1 00:12:52.272 --rc geninfo_unexecuted_blocks=1 00:12:52.272 00:12:52.272 ' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:52.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.272 --rc genhtml_branch_coverage=1 00:12:52.272 --rc genhtml_function_coverage=1 00:12:52.272 --rc genhtml_legend=1 00:12:52.272 --rc geninfo_all_blocks=1 00:12:52.272 --rc geninfo_unexecuted_blocks=1 00:12:52.272 00:12:52.272 ' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.272 13:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:52.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:52.272 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.805 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.806 13:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:54.806 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:54.806 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.806 13:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:54.806 Found net devices under 0000:09:00.0: cvl_0_0 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:54.806 Found net devices under 0000:09:00.1: cvl_0_1 00:12:54.806 13:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:54.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:12:54.806 00:12:54.806 --- 10.0.0.2 ping statistics --- 00:12:54.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.806 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:12:54.806 00:12:54.806 --- 10.0.0.1 ping statistics --- 00:12:54.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.806 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:54.806 13:11:52 
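After both pings succeed, common.sh@293 prepends the netns wrapper to the target command with `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")`, which is why the later `nvmf_tgt` launch runs as `ip netns exec cvl_0_0_ns_spdk ...`. A sketch of that array-prefix pattern, with `echo` standing in for actually launching the app (the binary path is illustrative):

```shell
#!/usr/bin/env bash
# Names copied from the trace; the nvmf_tgt path is a stand-in.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF)

# Prepending one array to another keeps each argument as its own word, so
# flags and paths survive intact and every invocation runs inside the netns.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"
```

Composing commands as arrays (rather than concatenated strings) is what makes this safe under `set -e`/xtrace harnesses like this one: no re-splitting, no quoting surprises.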
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:54.806 ************************************ 00:12:54.806 START TEST nvmf_filesystem_no_in_capsule 00:12:54.806 ************************************ 00:12:54.806 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3111629 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3111629 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3111629 ']' 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.807 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.807 [2024-11-25 13:11:52.281414] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:12:54.807 [2024-11-25 13:11:52.281505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.807 [2024-11-25 13:11:52.356351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.807 [2024-11-25 13:11:52.415382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.807 [2024-11-25 13:11:52.415437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:54.807 [2024-11-25 13:11:52.415466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.807 [2024-11-25 13:11:52.415478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.807 [2024-11-25 13:11:52.415487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.807 [2024-11-25 13:11:52.416952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.807 [2024-11-25 13:11:52.417026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.807 [2024-11-25 13:11:52.417105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.807 [2024-11-25 13:11:52.417108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.065 [2024-11-25 13:11:52.566203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.065 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.324 Malloc1 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.324 [2024-11-25 13:11:52.767719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:55.324 13:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:55.324 { 00:12:55.324 "name": "Malloc1", 00:12:55.324 "aliases": [ 00:12:55.324 "74c62877-e46c-4f2a-a81f-4801ccc67f96" 00:12:55.324 ], 00:12:55.324 "product_name": "Malloc disk", 00:12:55.324 "block_size": 512, 00:12:55.324 "num_blocks": 1048576, 00:12:55.324 "uuid": "74c62877-e46c-4f2a-a81f-4801ccc67f96", 00:12:55.324 "assigned_rate_limits": { 00:12:55.324 "rw_ios_per_sec": 0, 00:12:55.324 "rw_mbytes_per_sec": 0, 00:12:55.324 "r_mbytes_per_sec": 0, 00:12:55.324 "w_mbytes_per_sec": 0 00:12:55.324 }, 00:12:55.324 "claimed": true, 00:12:55.324 "claim_type": "exclusive_write", 00:12:55.324 "zoned": false, 00:12:55.324 "supported_io_types": { 00:12:55.324 "read": true, 00:12:55.324 "write": true, 00:12:55.324 "unmap": true, 00:12:55.324 "flush": true, 00:12:55.324 "reset": true, 00:12:55.324 "nvme_admin": false, 00:12:55.324 "nvme_io": false, 00:12:55.324 "nvme_io_md": false, 00:12:55.324 "write_zeroes": true, 00:12:55.324 "zcopy": true, 00:12:55.324 "get_zone_info": false, 00:12:55.324 "zone_management": false, 00:12:55.324 "zone_append": false, 00:12:55.324 "compare": false, 00:12:55.324 "compare_and_write": 
false, 00:12:55.324 "abort": true, 00:12:55.324 "seek_hole": false, 00:12:55.324 "seek_data": false, 00:12:55.324 "copy": true, 00:12:55.324 "nvme_iov_md": false 00:12:55.324 }, 00:12:55.324 "memory_domains": [ 00:12:55.324 { 00:12:55.324 "dma_device_id": "system", 00:12:55.324 "dma_device_type": 1 00:12:55.324 }, 00:12:55.324 { 00:12:55.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.324 "dma_device_type": 2 00:12:55.324 } 00:12:55.324 ], 00:12:55.324 "driver_specific": {} 00:12:55.324 } 00:12:55.324 ]' 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:55.324 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.259 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:56.259 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.259 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.259 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.259 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:58.157 13:11:55 
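The `waitforserial` trace above (autotest_common.sh@1202-1212) is a bounded poll: probe `lsblk` for the serial, sleep, retry up to 16 times, return 0 once the device count matches. A self-contained sketch of that loop shape, with a stubbed probe in place of `lsblk -l -o NAME,SERIAL | grep -c <serial>` (the stub succeeds on its third call; names here are hypothetical):

```shell
#!/usr/bin/env bash
# Bounded-retry pattern mirroring waitforserial; check_devices is a stand-in
# probe that sets the global $found, avoiding a subshell so state persists.
waitforserial_sketch() {
    local expected=$1 i=0
    while (( i++ <= 15 )); do
        check_devices              # real harness: lsblk | grep -c "$serial"
        (( found == expected )) && return 0
                                   # real harness sleeps 2s between probes
    done
    return 1
}

attempts=0
check_devices() {
    attempts=$(( attempts + 1 ))
    if (( attempts >= 3 )); then found=1; else found=0; fi
}

waitforserial_sketch 1 && echo connected
```

The key detail is bounding the loop: an unbounded wait would hang the whole autotest job if the fabric connect silently failed.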
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:58.157 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:58.723 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:59.701 13:11:57 
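The size check at target/filesystem.sh@67 compares two independently derived byte counts: the bdev size computed from `bdev_get_bdevs` output (block_size × num_blocks) and the block-device size reported through sysfs. With the values shown in the log, the arithmetic is:

```shell
#!/usr/bin/env bash
# Values taken from the bdev_get_bdevs JSON and sec_size_to_bytes output above.
block_size=512
num_blocks=1048576

malloc_size=$(( block_size * num_blocks ))   # 512 * 1048576
nvme_size=536870912                          # sec_size_to_bytes nvme0n1

echo "$malloc_size"
(( nvme_size == malloc_size )) && echo "sizes match"
```

Both sides resolving to 536870912 bytes (512 MiB) is what lets the test proceed to partitioning the namespace.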
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.701 ************************************ 00:12:59.701 START TEST filesystem_ext4 00:12:59.701 ************************************ 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:59.701 13:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:59.701 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:59.701 mke2fs 1.47.0 (5-Feb-2023) 00:12:59.701 Discarding device blocks: 0/522240 done 00:12:59.701 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:59.701 Filesystem UUID: 4c345285-f578-48fd-b5a1-51871d168641 00:12:59.701 Superblock backups stored on blocks: 00:12:59.701 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:59.701 00:12:59.701 Allocating group tables: 0/64 done 00:12:59.701 Writing inode tables: 0/64 done 00:12:59.958 Creating journal (8192 blocks): done 00:12:59.958 Writing superblocks and filesystem accounting information: 0/64 done 00:12:59.958 00:12:59.958 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:59.958 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:05.218 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:05.476 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:05.476 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:05.476 13:12:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:05.476 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:05.476 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3111629 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:05.476 00:13:05.476 real 0m5.835s 00:13:05.476 user 0m0.016s 00:13:05.476 sys 0m0.071s 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:05.476 ************************************ 00:13:05.476 END TEST filesystem_ext4 00:13:05.476 ************************************ 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:05.476 
13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:05.476 ************************************ 00:13:05.476 START TEST filesystem_btrfs 00:13:05.476 ************************************ 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:05.476 13:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:05.476 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:05.733 btrfs-progs v6.8.1 00:13:05.734 See https://btrfs.readthedocs.io for more information. 00:13:05.734 00:13:05.734 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:05.734 NOTE: several default settings have changed in version 5.15, please make sure 00:13:05.734 this does not affect your deployments: 00:13:05.734 - DUP for metadata (-m dup) 00:13:05.734 - enabled no-holes (-O no-holes) 00:13:05.734 - enabled free-space-tree (-R free-space-tree) 00:13:05.734 00:13:05.734 Label: (null) 00:13:05.734 UUID: 784ca065-ad07-46c2-b4e1-96402477206b 00:13:05.734 Node size: 16384 00:13:05.734 Sector size: 4096 (CPU page size: 4096) 00:13:05.734 Filesystem size: 510.00MiB 00:13:05.734 Block group profiles: 00:13:05.734 Data: single 8.00MiB 00:13:05.734 Metadata: DUP 32.00MiB 00:13:05.734 System: DUP 8.00MiB 00:13:05.734 SSD detected: yes 00:13:05.734 Zoned device: no 00:13:05.734 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:05.734 Checksum: crc32c 00:13:05.734 Number of devices: 1 00:13:05.734 Devices: 00:13:05.734 ID SIZE PATH 00:13:05.734 1 510.00MiB /dev/nvme0n1p1 00:13:05.734 00:13:05.734 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:05.734 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:06.299 13:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3111629 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:06.299 00:13:06.299 real 0m0.675s 00:13:06.299 user 0m0.019s 00:13:06.299 sys 0m0.099s 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.299 
13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:06.299 ************************************ 00:13:06.299 END TEST filesystem_btrfs 00:13:06.299 ************************************ 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.299 ************************************ 00:13:06.299 START TEST filesystem_xfs 00:13:06.299 ************************************ 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:06.299 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:06.299 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:06.299 = sectsz=512 attr=2, projid32bit=1 00:13:06.299 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:06.299 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:06.299 data = bsize=4096 blocks=130560, imaxpct=25 00:13:06.299 = sunit=0 swidth=0 blks 00:13:06.299 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:06.300 log =internal log bsize=4096 blocks=16384, version=2 00:13:06.300 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:06.300 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:07.231 Discarding blocks...Done. 
00:13:07.231 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:07.231 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3111629 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:09.189 13:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:09.189 00:13:09.189 real 0m2.907s 00:13:09.189 user 0m0.014s 00:13:09.189 sys 0m0.062s 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:09.189 ************************************ 00:13:09.189 END TEST filesystem_xfs 00:13:09.189 ************************************ 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:09.189 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:09.446 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3111629 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3111629 ']' 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3111629 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111629 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111629' 00:13:09.447 killing process with pid 3111629 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3111629 00:13:09.447 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3111629 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:10.013 00:13:10.013 real 0m15.165s 00:13:10.013 user 0m58.611s 00:13:10.013 sys 0m2.010s 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.013 ************************************ 00:13:10.013 END TEST nvmf_filesystem_no_in_capsule 00:13:10.013 ************************************ 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.013 13:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:10.013 ************************************ 00:13:10.013 START TEST nvmf_filesystem_in_capsule 00:13:10.013 ************************************ 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3114070 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3114070 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3114070 ']' 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.013 13:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.013 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.013 [2024-11-25 13:12:07.490499] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:13:10.013 [2024-11-25 13:12:07.490578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.013 [2024-11-25 13:12:07.566452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.013 [2024-11-25 13:12:07.628346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.013 [2024-11-25 13:12:07.628396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.013 [2024-11-25 13:12:07.628432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.013 [2024-11-25 13:12:07.628445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.013 [2024-11-25 13:12:07.628455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.013 [2024-11-25 13:12:07.629947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.013 [2024-11-25 13:12:07.630016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.013 [2024-11-25 13:12:07.630076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.013 [2024-11-25 13:12:07.630079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.271 [2024-11-25 13:12:07.780360] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.271 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.529 Malloc1 00:13:10.529 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.529 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.529 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.529 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.529 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.530 13:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.530 [2024-11-25 13:12:07.975106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.530 13:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:10.530 { 00:13:10.530 "name": "Malloc1", 00:13:10.530 "aliases": [ 00:13:10.530 "1c3b1973-d60c-46a0-be58-cd9475370f8f" 00:13:10.530 ], 00:13:10.530 "product_name": "Malloc disk", 00:13:10.530 "block_size": 512, 00:13:10.530 "num_blocks": 1048576, 00:13:10.530 "uuid": "1c3b1973-d60c-46a0-be58-cd9475370f8f", 00:13:10.530 "assigned_rate_limits": { 00:13:10.530 "rw_ios_per_sec": 0, 00:13:10.530 "rw_mbytes_per_sec": 0, 00:13:10.530 "r_mbytes_per_sec": 0, 00:13:10.530 "w_mbytes_per_sec": 0 00:13:10.530 }, 00:13:10.530 "claimed": true, 00:13:10.530 "claim_type": "exclusive_write", 00:13:10.530 "zoned": false, 00:13:10.530 "supported_io_types": { 00:13:10.530 "read": true, 00:13:10.530 "write": true, 00:13:10.530 "unmap": true, 00:13:10.530 "flush": true, 00:13:10.530 "reset": true, 00:13:10.530 "nvme_admin": false, 00:13:10.530 "nvme_io": false, 00:13:10.530 "nvme_io_md": false, 00:13:10.530 "write_zeroes": true, 00:13:10.530 "zcopy": true, 00:13:10.530 "get_zone_info": false, 00:13:10.530 "zone_management": false, 00:13:10.530 "zone_append": false, 00:13:10.530 "compare": false, 00:13:10.530 "compare_and_write": false, 00:13:10.530 "abort": true, 00:13:10.530 "seek_hole": false, 00:13:10.530 "seek_data": false, 00:13:10.530 "copy": true, 00:13:10.530 "nvme_iov_md": false 00:13:10.530 }, 00:13:10.530 "memory_domains": [ 00:13:10.530 { 00:13:10.530 "dma_device_id": "system", 00:13:10.530 "dma_device_type": 1 00:13:10.530 }, 00:13:10.530 { 00:13:10.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.530 "dma_device_type": 2 00:13:10.530 } 00:13:10.530 ], 00:13:10.530 
"driver_specific": {} 00:13:10.530 } 00:13:10.530 ]' 00:13:10.530 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:10.530 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:10.530 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:10.530 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:10.530 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:10.530 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:10.530 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:10.530 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.097 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.097 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.097 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.097 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:13:11.097 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:13.623 13:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:13.623 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:13.880 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:14.813 ************************************ 00:13:14.813 START TEST filesystem_in_capsule_ext4 00:13:14.813 ************************************ 00:13:14.813 13:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:14.813 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:14.813 mke2fs 1.47.0 (5-Feb-2023) 00:13:14.813 Discarding device blocks: 
0/522240 done 00:13:15.070 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:15.070 Filesystem UUID: f5d70eeb-3be6-4ee0-b1a0-a5cff545ecc2 00:13:15.070 Superblock backups stored on blocks: 00:13:15.070 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:15.070 00:13:15.070 Allocating group tables: 0/64 done 00:13:15.070 Writing inode tables: 0/64 done 00:13:15.635 Creating journal (8192 blocks): done 00:13:15.635 Writing superblocks and filesystem accounting information: 0/64 done 00:13:15.635 00:13:15.635 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:15.635 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3114070 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:20.893 00:13:20.893 real 0m6.205s 00:13:20.893 user 0m0.022s 00:13:20.893 sys 0m0.062s 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:20.893 ************************************ 00:13:20.893 END TEST filesystem_in_capsule_ext4 00:13:20.893 ************************************ 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.893 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.151 ************************************ 00:13:21.151 START 
TEST filesystem_in_capsule_btrfs 00:13:21.151 ************************************ 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:21.151 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:21.409 btrfs-progs v6.8.1 00:13:21.409 See https://btrfs.readthedocs.io for more information. 00:13:21.409 00:13:21.409 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:21.409 NOTE: several default settings have changed in version 5.15, please make sure 00:13:21.409 this does not affect your deployments: 00:13:21.409 - DUP for metadata (-m dup) 00:13:21.409 - enabled no-holes (-O no-holes) 00:13:21.410 - enabled free-space-tree (-R free-space-tree) 00:13:21.410 00:13:21.410 Label: (null) 00:13:21.410 UUID: 09dd3ad3-60bd-42f6-937e-1912693acc7c 00:13:21.410 Node size: 16384 00:13:21.410 Sector size: 4096 (CPU page size: 4096) 00:13:21.410 Filesystem size: 510.00MiB 00:13:21.410 Block group profiles: 00:13:21.410 Data: single 8.00MiB 00:13:21.410 Metadata: DUP 32.00MiB 00:13:21.410 System: DUP 8.00MiB 00:13:21.410 SSD detected: yes 00:13:21.410 Zoned device: no 00:13:21.410 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:21.410 Checksum: crc32c 00:13:21.410 Number of devices: 1 00:13:21.410 Devices: 00:13:21.410 ID SIZE PATH 00:13:21.410 1 510.00MiB /dev/nvme0n1p1 00:13:21.410 00:13:21.410 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:21.410 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:21.667 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:21.667 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:21.667 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:21.667 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:21.667 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:21.667 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3114070 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:21.925 00:13:21.925 real 0m0.790s 00:13:21.925 user 0m0.011s 00:13:21.925 sys 0m0.108s 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:21.925 ************************************ 00:13:21.925 END TEST filesystem_in_capsule_btrfs 00:13:21.925 ************************************ 00:13:21.925 13:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:21.925 ************************************ 00:13:21.925 START TEST filesystem_in_capsule_xfs 00:13:21.925 ************************************ 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:21.925 
13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:21.925 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:21.925 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:21.925 = sectsz=512 attr=2, projid32bit=1 00:13:21.925 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:21.925 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:21.925 data = bsize=4096 blocks=130560, imaxpct=25 00:13:21.925 = sunit=0 swidth=0 blks 00:13:21.925 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:21.925 log =internal log bsize=4096 blocks=16384, version=2 00:13:21.925 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:21.925 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:22.859 Discarding blocks...Done. 
00:13:22.859 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:22.859 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3114070 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:25.387 00:13:25.387 real 0m3.478s 00:13:25.387 user 0m0.025s 00:13:25.387 sys 0m0.050s 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:25.387 ************************************ 00:13:25.387 END TEST filesystem_in_capsule_xfs 00:13:25.387 ************************************ 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:25.387 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.388 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.388 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:25.388 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:25.388 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.388 13:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3114070 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3114070 ']' 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3114070 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:25.388 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.388 13:12:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3114070 00:13:25.646 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.646 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.646 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3114070' 00:13:25.646 killing process with pid 3114070 00:13:25.646 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3114070 00:13:25.646 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3114070 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:25.905 00:13:25.905 real 0m16.061s 00:13:25.905 user 1m2.085s 00:13:25.905 sys 0m2.071s 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.905 ************************************ 00:13:25.905 END TEST nvmf_filesystem_in_capsule 00:13:25.905 ************************************ 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.905 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:25.905 rmmod nvme_tcp 00:13:25.905 rmmod nvme_fabrics 00:13:25.905 rmmod nvme_keyring 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.164 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:28.071 00:13:28.071 real 0m36.166s 00:13:28.071 user 2m1.786s 00:13:28.071 sys 0m5.952s 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:28.071 ************************************ 00:13:28.071 END TEST nvmf_filesystem 00:13:28.071 ************************************ 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.071 ************************************ 00:13:28.071 START TEST nvmf_target_discovery 00:13:28.071 ************************************ 00:13:28.071 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:28.330 * Looking for test storage... 
00:13:28.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:28.330 
13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:28.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.330 --rc genhtml_branch_coverage=1 00:13:28.330 --rc genhtml_function_coverage=1 00:13:28.330 --rc genhtml_legend=1 00:13:28.330 --rc geninfo_all_blocks=1 00:13:28.330 --rc geninfo_unexecuted_blocks=1 00:13:28.330 00:13:28.330 ' 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:28.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.330 --rc genhtml_branch_coverage=1 00:13:28.330 --rc genhtml_function_coverage=1 00:13:28.330 --rc genhtml_legend=1 00:13:28.330 --rc geninfo_all_blocks=1 00:13:28.330 --rc geninfo_unexecuted_blocks=1 00:13:28.330 00:13:28.330 ' 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:28.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.330 --rc genhtml_branch_coverage=1 00:13:28.330 --rc genhtml_function_coverage=1 00:13:28.330 --rc genhtml_legend=1 00:13:28.330 --rc geninfo_all_blocks=1 00:13:28.330 --rc geninfo_unexecuted_blocks=1 00:13:28.330 00:13:28.330 ' 00:13:28.330 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:28.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.330 --rc genhtml_branch_coverage=1 00:13:28.330 --rc genhtml_function_coverage=1 00:13:28.331 --rc genhtml_legend=1 00:13:28.331 --rc geninfo_all_blocks=1 00:13:28.331 --rc geninfo_unexecuted_blocks=1 00:13:28.331 00:13:28.331 ' 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.331 13:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:28.331 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.906 13:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.906 13:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:30.906 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:30.906 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.906 13:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:30.906 Found net devices under 0000:09:00.0: cvl_0_0 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.906 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:30.907 13:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:30.907 Found net devices under 0000:09:00.1: cvl_0_1 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:13:30.907 00:13:30.907 --- 10.0.0.2 ping statistics --- 00:13:30.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.907 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:13:30.907 00:13:30.907 --- 10.0.0.1 ping statistics --- 00:13:30.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.907 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3118347 00:13:30.907 13:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3118347 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3118347 ']' 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.907 [2024-11-25 13:12:28.234180] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:13:30.907 [2024-11-25 13:12:28.234257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.907 [2024-11-25 13:12:28.308202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.907 [2024-11-25 13:12:28.367103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:30.907 [2024-11-25 13:12:28.367157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.907 [2024-11-25 13:12:28.367184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.907 [2024-11-25 13:12:28.367194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.907 [2024-11-25 13:12:28.367204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.907 [2024-11-25 13:12:28.368788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.907 [2024-11-25 13:12:28.368863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.907 [2024-11-25 13:12:28.368924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.907 [2024-11-25 13:12:28.368928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.907 [2024-11-25 13:12:28.522062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.907 Null1 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.907 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:30.908 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.908 
13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.908 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.908 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.908 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.908 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:30.908 [2024-11-25 13:12:28.562386] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.166 Null2 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.166 
13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.166 Null3 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:31.166 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 Null4 00:13:31.167 
13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:13:31.167 00:13:31.167 Discovery Log Number of Records 6, Generation counter 6 00:13:31.167 =====Discovery Log Entry 0====== 00:13:31.167 trtype: tcp 00:13:31.167 adrfam: ipv4 00:13:31.167 subtype: current discovery subsystem 00:13:31.167 treq: not required 00:13:31.167 portid: 0 00:13:31.167 trsvcid: 4420 00:13:31.167 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:31.167 traddr: 10.0.0.2 00:13:31.167 eflags: explicit discovery connections, duplicate discovery information 00:13:31.167 sectype: none 00:13:31.167 =====Discovery Log Entry 1====== 00:13:31.167 trtype: tcp 00:13:31.167 adrfam: ipv4 00:13:31.167 subtype: nvme subsystem 00:13:31.167 treq: not required 00:13:31.167 portid: 0 00:13:31.167 trsvcid: 4420 00:13:31.167 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:31.167 traddr: 10.0.0.2 00:13:31.167 eflags: none 00:13:31.167 sectype: none 00:13:31.167 =====Discovery Log Entry 2====== 00:13:31.167 
trtype: tcp 00:13:31.167 adrfam: ipv4 00:13:31.167 subtype: nvme subsystem 00:13:31.167 treq: not required 00:13:31.167 portid: 0 00:13:31.167 trsvcid: 4420 00:13:31.167 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:31.167 traddr: 10.0.0.2 00:13:31.167 eflags: none 00:13:31.167 sectype: none 00:13:31.167 =====Discovery Log Entry 3====== 00:13:31.167 trtype: tcp 00:13:31.167 adrfam: ipv4 00:13:31.167 subtype: nvme subsystem 00:13:31.167 treq: not required 00:13:31.167 portid: 0 00:13:31.167 trsvcid: 4420 00:13:31.167 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:31.167 traddr: 10.0.0.2 00:13:31.167 eflags: none 00:13:31.167 sectype: none 00:13:31.167 =====Discovery Log Entry 4====== 00:13:31.167 trtype: tcp 00:13:31.167 adrfam: ipv4 00:13:31.167 subtype: nvme subsystem 00:13:31.167 treq: not required 00:13:31.167 portid: 0 00:13:31.167 trsvcid: 4420 00:13:31.167 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:31.167 traddr: 10.0.0.2 00:13:31.167 eflags: none 00:13:31.167 sectype: none 00:13:31.167 =====Discovery Log Entry 5====== 00:13:31.167 trtype: tcp 00:13:31.167 adrfam: ipv4 00:13:31.167 subtype: discovery subsystem referral 00:13:31.167 treq: not required 00:13:31.167 portid: 0 00:13:31.167 trsvcid: 4430 00:13:31.167 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:31.167 traddr: 10.0.0.2 00:13:31.167 eflags: none 00:13:31.167 sectype: none 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:31.167 Perform nvmf subsystem discovery via RPC 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.167 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.167 [ 00:13:31.167 { 00:13:31.167 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:31.167 "subtype": "Discovery", 00:13:31.167 "listen_addresses": [ 00:13:31.167 { 00:13:31.167 "trtype": "TCP", 00:13:31.167 "adrfam": "IPv4", 00:13:31.167 "traddr": "10.0.0.2", 00:13:31.167 "trsvcid": "4420" 00:13:31.167 } 00:13:31.167 ], 00:13:31.167 "allow_any_host": true, 00:13:31.167 "hosts": [] 00:13:31.167 }, 00:13:31.167 { 00:13:31.167 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.167 "subtype": "NVMe", 00:13:31.167 "listen_addresses": [ 00:13:31.167 { 00:13:31.167 "trtype": "TCP", 00:13:31.167 "adrfam": "IPv4", 00:13:31.167 "traddr": "10.0.0.2", 00:13:31.167 "trsvcid": "4420" 00:13:31.167 } 00:13:31.167 ], 00:13:31.167 "allow_any_host": true, 00:13:31.167 "hosts": [], 00:13:31.167 "serial_number": "SPDK00000000000001", 00:13:31.167 "model_number": "SPDK bdev Controller", 00:13:31.167 "max_namespaces": 32, 00:13:31.167 "min_cntlid": 1, 00:13:31.167 "max_cntlid": 65519, 00:13:31.167 "namespaces": [ 00:13:31.167 { 00:13:31.167 "nsid": 1, 00:13:31.167 "bdev_name": "Null1", 00:13:31.167 "name": "Null1", 00:13:31.167 "nguid": "E75691E3FD964144B1A75871883CBF89", 00:13:31.167 "uuid": "e75691e3-fd96-4144-b1a7-5871883cbf89" 00:13:31.167 } 00:13:31.167 ] 00:13:31.167 }, 00:13:31.167 { 00:13:31.167 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:31.167 "subtype": "NVMe", 00:13:31.167 "listen_addresses": [ 00:13:31.167 { 00:13:31.425 "trtype": "TCP", 00:13:31.425 "adrfam": "IPv4", 00:13:31.425 "traddr": "10.0.0.2", 00:13:31.425 "trsvcid": "4420" 00:13:31.425 } 00:13:31.425 ], 00:13:31.425 "allow_any_host": true, 00:13:31.425 "hosts": [], 00:13:31.425 "serial_number": "SPDK00000000000002", 00:13:31.425 "model_number": "SPDK bdev Controller", 00:13:31.425 "max_namespaces": 32, 00:13:31.425 "min_cntlid": 1, 00:13:31.425 "max_cntlid": 65519, 00:13:31.425 "namespaces": [ 00:13:31.425 { 00:13:31.425 "nsid": 1, 00:13:31.425 "bdev_name": "Null2", 00:13:31.425 "name": "Null2", 00:13:31.425 "nguid": "8F335BBA7B3643E28840797588A3B339", 
00:13:31.425 "uuid": "8f335bba-7b36-43e2-8840-797588a3b339" 00:13:31.425 } 00:13:31.425 ] 00:13:31.425 }, 00:13:31.425 { 00:13:31.425 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:31.425 "subtype": "NVMe", 00:13:31.425 "listen_addresses": [ 00:13:31.425 { 00:13:31.425 "trtype": "TCP", 00:13:31.425 "adrfam": "IPv4", 00:13:31.425 "traddr": "10.0.0.2", 00:13:31.425 "trsvcid": "4420" 00:13:31.425 } 00:13:31.425 ], 00:13:31.425 "allow_any_host": true, 00:13:31.425 "hosts": [], 00:13:31.425 "serial_number": "SPDK00000000000003", 00:13:31.425 "model_number": "SPDK bdev Controller", 00:13:31.425 "max_namespaces": 32, 00:13:31.425 "min_cntlid": 1, 00:13:31.425 "max_cntlid": 65519, 00:13:31.425 "namespaces": [ 00:13:31.425 { 00:13:31.425 "nsid": 1, 00:13:31.425 "bdev_name": "Null3", 00:13:31.425 "name": "Null3", 00:13:31.425 "nguid": "CBB76BE276B849D1A31CFC80066FA939", 00:13:31.425 "uuid": "cbb76be2-76b8-49d1-a31c-fc80066fa939" 00:13:31.425 } 00:13:31.425 ] 00:13:31.425 }, 00:13:31.425 { 00:13:31.425 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:31.426 "subtype": "NVMe", 00:13:31.426 "listen_addresses": [ 00:13:31.426 { 00:13:31.426 "trtype": "TCP", 00:13:31.426 "adrfam": "IPv4", 00:13:31.426 "traddr": "10.0.0.2", 00:13:31.426 "trsvcid": "4420" 00:13:31.426 } 00:13:31.426 ], 00:13:31.426 "allow_any_host": true, 00:13:31.426 "hosts": [], 00:13:31.426 "serial_number": "SPDK00000000000004", 00:13:31.426 "model_number": "SPDK bdev Controller", 00:13:31.426 "max_namespaces": 32, 00:13:31.426 "min_cntlid": 1, 00:13:31.426 "max_cntlid": 65519, 00:13:31.426 "namespaces": [ 00:13:31.426 { 00:13:31.426 "nsid": 1, 00:13:31.426 "bdev_name": "Null4", 00:13:31.426 "name": "Null4", 00:13:31.426 "nguid": "595D15950AAB4FA4B80E6FF9F9BE753B", 00:13:31.426 "uuid": "595d1595-0aab-4fa4-b80e-6ff9f9be753b" 00:13:31.426 } 00:13:31.426 ] 00:13:31.426 } 00:13:31.426 ] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 
13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:31.426 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:31.426 rmmod nvme_tcp 00:13:31.426 rmmod nvme_fabrics 00:13:31.426 rmmod nvme_keyring 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3118347 ']' 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3118347 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3118347 ']' 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3118347 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3118347 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3118347' 00:13:31.426 killing process with pid 3118347 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3118347 00:13:31.426 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3118347 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.690 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:34.225 00:13:34.225 real 0m5.660s 00:13:34.225 user 0m4.600s 00:13:34.225 sys 0m2.044s 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:34.225 ************************************ 00:13:34.225 END TEST nvmf_target_discovery 00:13:34.225 ************************************ 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.225 ************************************ 00:13:34.225 START TEST nvmf_referrals 00:13:34.225 ************************************ 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:34.225 * Looking for test storage... 
00:13:34.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:34.225 13:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:34.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.225 
--rc genhtml_branch_coverage=1 00:13:34.225 --rc genhtml_function_coverage=1 00:13:34.225 --rc genhtml_legend=1 00:13:34.225 --rc geninfo_all_blocks=1 00:13:34.225 --rc geninfo_unexecuted_blocks=1 00:13:34.225 00:13:34.225 ' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:34.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.225 --rc genhtml_branch_coverage=1 00:13:34.225 --rc genhtml_function_coverage=1 00:13:34.225 --rc genhtml_legend=1 00:13:34.225 --rc geninfo_all_blocks=1 00:13:34.225 --rc geninfo_unexecuted_blocks=1 00:13:34.225 00:13:34.225 ' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:34.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.225 --rc genhtml_branch_coverage=1 00:13:34.225 --rc genhtml_function_coverage=1 00:13:34.225 --rc genhtml_legend=1 00:13:34.225 --rc geninfo_all_blocks=1 00:13:34.225 --rc geninfo_unexecuted_blocks=1 00:13:34.225 00:13:34.225 ' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:34.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.225 --rc genhtml_branch_coverage=1 00:13:34.225 --rc genhtml_function_coverage=1 00:13:34.225 --rc genhtml_legend=1 00:13:34.225 --rc geninfo_all_blocks=1 00:13:34.225 --rc geninfo_unexecuted_blocks=1 00:13:34.225 00:13:34.225 ' 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.225 
13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.225 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.226 13:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:34.226 13:12:31 
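Editor's note: the trace above records a real bash error, `[: : integer expression expected`, at nvmf/common.sh line 33. It comes from `'[' '' -eq 1 ']'`: `test`'s `-eq` requires both operands to be integers, and an unset or empty variable expands to the empty string. A minimal sketch of the failure mode and a guarded alternative (`FLAG` is a hypothetical stand-in for the script's variable):

```shell
FLAG=""

# Mirrors the logged failure: -eq on an empty string is an error.
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "flag set"
else
  echo "flag unset or non-numeric"
fi

# ${FLAG:-0} substitutes 0 when FLAG is unset or null, so the
# comparison always sees an integer and the error cannot occur.
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```

The same guard applies to any numeric `[ ... -eq ... ]` whose operand may be empty.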
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:34.226 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:36.127 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.127 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:36.128 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:36.128 Found net devices under 0000:09:00.0: cvl_0_0 00:13:36.128 13:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:36.128 Found net devices under 0000:09:00.1: cvl_0_1 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
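Editor's note: the device-gathering loop above resolves each NIC's PCI address (e.g. 0000:09:00.0) to its kernel netdev name by globbing `/sys/bus/pci/devices/$pci/net/*`, then strips the directory prefix with `"${pci_net_devs[@]##*/}"`. A self-contained sketch of that idiom against a throwaway fake sysfs tree (so it runs without real hardware):

```shell
# Fake sysfs layout: one PCI function exposing one netdev, as in the log.
root=$(mktemp -d)
mkdir -p "$root/0000:09:00.0/net/cvl_0_0"

# Glob the net/ subdirectory, then basename every entry in one expansion.
pci_net_devs=("$root/0000:09:00.0/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under 0000:09:00.0: ${pci_net_devs[0]}"
rm -rf "$root"
```

If the glob matches nothing, the unexpanded pattern is left in the array, which is why the real script also checks element counts before using the result.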
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.128 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:36.386 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:36.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:13:36.387 00:13:36.387 --- 10.0.0.2 ping statistics --- 00:13:36.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.387 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:36.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:13:36.387 00:13:36.387 --- 10.0.0.1 ping statistics --- 00:13:36.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.387 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3120448 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3120448 00:13:36.387 
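Editor's note: `nvmf_tcp_init` above builds a loopback topology out of the two ports of one physical NIC: cvl_0_0 is moved into namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), and cross-namespace pings verify the path before the target starts. A dry-run sketch that prints the command sequence rather than executing it, since the real commands need root and the physical cvl_0_* interfaces:

```shell
# Emit the namespace plumbing seen in the trace; netns_plan is a
# hypothetical helper name, the order follows the logged commands.
netns_plan() {
  local ns=$1 tgt_if=$2 tgt_ip=$3 ini_if=$4 ini_ip=$5
  echo "ip -4 addr flush $tgt_if"
  echo "ip -4 addr flush $ini_if"
  echo "ip netns add $ns"
  echo "ip link set $tgt_if netns $ns"
  echo "ip addr add $ini_ip/24 dev $ini_if"
  echo "ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_if"
  echo "ip link set $ini_if up"
  echo "ip netns exec $ns ip link set $tgt_if up"
  echo "ip netns exec $ns ip link set lo up"
  echo "ping -c 1 $tgt_ip"
  echo "ip netns exec $ns ping -c 1 $ini_ip"
}

netns_plan cvl_0_0_ns_spdk cvl_0_0 10.0.0.2 cvl_0_1 10.0.0.1
```

The iptables ACCEPT rule the log adds on port 4420 serves the same purpose for the data path as the pings do for basic reachability.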
13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3120448 ']' 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:36.387 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.387 [2024-11-25 13:12:33.938728] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:13:36.387 [2024-11-25 13:12:33.938821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.387 [2024-11-25 13:12:34.018173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.645 [2024-11-25 13:12:34.078998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.645 [2024-11-25 13:12:34.079049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
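Editor's note: `waitforlisten 3120448` above blocks until the freshly launched nvmf_tgt accepts RPCs on /var/tmp/spdk.sock. A hedged sketch of that style of readiness loop; `wait_for_path` is a hypothetical name, and the real helper in autotest_common.sh additionally verifies the PID is still alive and probes the socket with an actual RPC rather than a mere existence check:

```shell
# Poll until a filesystem path appears, up to `retries` * 0.1s.
wait_for_path() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}

# Typical use after launching the target in the background:
#   wait_for_path /var/tmp/spdk.sock || { echo "target never came up"; exit 1; }
```

Polling with a bounded retry count keeps the test from hanging forever when the target crashes during startup.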
00:13:36.645 [2024-11-25 13:12:34.079077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.645 [2024-11-25 13:12:34.079089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.645 [2024-11-25 13:12:34.079098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.645 [2024-11-25 13:12:34.080732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.645 [2024-11-25 13:12:34.080786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.645 [2024-11-25 13:12:34.080854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.645 [2024-11-25 13:12:34.080857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 [2024-11-25 13:12:34.229223] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 [2024-11-25 13:12:34.241488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.645 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:36.646 13:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.646 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:36.903 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.904 13:12:34 
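Editor's note: the checks above (`get_referral_ips rpc` vs `get_referral_ips nvme`) collect referral traddrs from two views of the same state, the target's `nvmf_discovery_get_referrals` RPC and the host's `nvme discover -o json`, then sort both lists and string-compare them. A minimal pure-shell sketch of the sort-and-compare step, with the discovery output replaced by canned addresses (`expect_ips` is a hypothetical helper):

```shell
# Compare an expected space-separated IP list against an unordered set
# of discovered addresses, order-insensitively.
expect_ips() {
  local expected=$1; shift
  local got
  got=$(printf '%s\n' "$@" | sort | xargs)  # sort, then re-join with spaces
  [ "$got" = "$expected" ]
}

expect_ips "127.0.0.2 127.0.0.3 127.0.0.4" 127.0.0.4 127.0.0.2 127.0.0.3 \
  && echo "referrals match"
```

Sorting before comparing is what lets the test tolerate the RPC and the discovery log page reporting entries in different orders.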
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:36.904 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.161 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:37.161 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:37.161 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.161 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:37.161 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:37.161 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.161 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.162 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.419 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:37.419 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:37.419 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:37.419 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:37.419 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:37.419 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:37.419 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:37.419 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:37.677 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:37.677 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:37.677 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:37.677 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:37.677 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:37.677 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:37.935 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:38.193 13:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:38.193 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.450 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.450 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:38.450 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:38.450 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:38.450 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:38.450 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:38.450 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:38.450 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:38.708 rmmod nvme_tcp 00:13:38.708 rmmod nvme_fabrics 00:13:38.708 rmmod nvme_keyring 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3120448 ']' 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3120448 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3120448 ']' 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3120448 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120448 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120448' 00:13:38.708 killing process with pid 3120448 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3120448 00:13:38.708 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3120448 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.967 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:41.540 00:13:41.540 real 0m7.200s 00:13:41.540 user 0m11.364s 00:13:41.540 sys 0m2.391s 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:41.540 
************************************ 00:13:41.540 END TEST nvmf_referrals 00:13:41.540 ************************************ 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:41.540 ************************************ 00:13:41.540 START TEST nvmf_connect_disconnect 00:13:41.540 ************************************ 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:41.540 * Looking for test storage... 
00:13:41.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.540 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:41.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.540 --rc genhtml_branch_coverage=1 00:13:41.540 --rc genhtml_function_coverage=1 00:13:41.540 --rc genhtml_legend=1 00:13:41.541 --rc geninfo_all_blocks=1 00:13:41.541 --rc geninfo_unexecuted_blocks=1 00:13:41.541 00:13:41.541 ' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:41.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.541 --rc genhtml_branch_coverage=1 00:13:41.541 --rc genhtml_function_coverage=1 00:13:41.541 --rc genhtml_legend=1 00:13:41.541 --rc geninfo_all_blocks=1 00:13:41.541 --rc geninfo_unexecuted_blocks=1 00:13:41.541 00:13:41.541 ' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:41.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.541 --rc genhtml_branch_coverage=1 00:13:41.541 --rc genhtml_function_coverage=1 00:13:41.541 --rc genhtml_legend=1 00:13:41.541 --rc geninfo_all_blocks=1 00:13:41.541 --rc geninfo_unexecuted_blocks=1 00:13:41.541 00:13:41.541 ' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:41.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.541 --rc genhtml_branch_coverage=1 00:13:41.541 --rc genhtml_function_coverage=1 00:13:41.541 --rc genhtml_legend=1 00:13:41.541 --rc geninfo_all_blocks=1 00:13:41.541 --rc geninfo_unexecuted_blocks=1 00:13:41.541 00:13:41.541 ' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:41.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:41.541 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.447 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.447 13:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.447 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.448 13:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:43.448 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:43.448 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.448 13:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:43.448 Found net devices under 0000:09:00.0: cvl_0_0 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.448 13:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:43.448 Found net devices under 0000:09:00.1: cvl_0_1 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.448 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.448 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.448 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.448 13:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.448 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.448 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.448 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:13:43.448 00:13:43.448 --- 10.0.0.2 ping statistics --- 00:13:43.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.448 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:43.448 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:13:43.448 00:13:43.449 --- 10.0.0.1 ping statistics --- 00:13:43.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.449 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=3122756 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3122756 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3122756 ']' 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.449 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.707 [2024-11-25 13:12:41.139557] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:13:43.707 [2024-11-25 13:12:41.139649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.707 [2024-11-25 13:12:41.209494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.707 [2024-11-25 13:12:41.265266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:43.707 [2024-11-25 13:12:41.265339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.707 [2024-11-25 13:12:41.265369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.707 [2024-11-25 13:12:41.265380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.707 [2024-11-25 13:12:41.265389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.707 [2024-11-25 13:12:41.266834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.707 [2024-11-25 13:12:41.266899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.707 [2024-11-25 13:12:41.267011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.707 [2024-11-25 13:12:41.267008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:43.966 13:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 [2024-11-25 13:12:41.406312] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.966 13:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:43.966 [2024-11-25 13:12:41.472604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:43.966 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:47.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:58.107 13:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:58.107 rmmod nvme_tcp 00:13:58.107 rmmod nvme_fabrics 00:13:58.107 rmmod nvme_keyring 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3122756 ']' 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3122756 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3122756 ']' 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3122756 00:13:58.107 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3122756 
00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3122756' 00:13:58.108 killing process with pid 3122756 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3122756 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3122756 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.108 13:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.108 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.644 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:00.644 00:14:00.644 real 0m19.092s 00:14:00.644 user 0m57.253s 00:14:00.644 sys 0m3.445s 00:14:00.644 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:00.645 ************************************ 00:14:00.645 END TEST nvmf_connect_disconnect 00:14:00.645 ************************************ 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.645 ************************************ 00:14:00.645 START TEST nvmf_multitarget 00:14:00.645 ************************************ 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:00.645 * Looking for test storage... 
00:14:00.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:00.645 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.645 --rc genhtml_branch_coverage=1 00:14:00.645 --rc genhtml_function_coverage=1 00:14:00.645 --rc genhtml_legend=1 00:14:00.645 --rc geninfo_all_blocks=1 00:14:00.645 --rc geninfo_unexecuted_blocks=1 00:14:00.645 00:14:00.645 ' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:00.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.645 --rc genhtml_branch_coverage=1 00:14:00.645 --rc genhtml_function_coverage=1 00:14:00.645 --rc genhtml_legend=1 00:14:00.645 --rc geninfo_all_blocks=1 00:14:00.645 --rc geninfo_unexecuted_blocks=1 00:14:00.645 00:14:00.645 ' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:00.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.645 --rc genhtml_branch_coverage=1 00:14:00.645 --rc genhtml_function_coverage=1 00:14:00.645 --rc genhtml_legend=1 00:14:00.645 --rc geninfo_all_blocks=1 00:14:00.645 --rc geninfo_unexecuted_blocks=1 00:14:00.645 00:14:00.645 ' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:00.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.645 --rc genhtml_branch_coverage=1 00:14:00.645 --rc genhtml_function_coverage=1 00:14:00.645 --rc genhtml_legend=1 00:14:00.645 --rc geninfo_all_blocks=1 00:14:00.645 --rc geninfo_unexecuted_blocks=1 00:14:00.645 00:14:00.645 ' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.645 13:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.645 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.646 13:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:00.646 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:02.548 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.548 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:02.548 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:02.548 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.548 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:02.548 Found net devices under 0000:09:00.0: cvl_0_0 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.548 
13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:02.548 Found net devices under 0000:09:00.1: cvl_0_1 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.548 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.548 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.549 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:14:02.807 00:14:02.807 --- 10.0.0.2 ping statistics --- 00:14:02.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.807 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:14:02.807 00:14:02.807 --- 10.0.0.1 ping statistics --- 00:14:02.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.807 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:14:02.807 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3126528 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3126528 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3126528 ']' 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.808 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:02.808 [2024-11-25 13:13:00.372428] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:14:02.808 [2024-11-25 13:13:00.372505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.808 [2024-11-25 13:13:00.446726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.066 [2024-11-25 13:13:00.506225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.066 [2024-11-25 13:13:00.506284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:03.066 [2024-11-25 13:13:00.506329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.066 [2024-11-25 13:13:00.506342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.066 [2024-11-25 13:13:00.506366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.066 [2024-11-25 13:13:00.508014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.066 [2024-11-25 13:13:00.508122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.066 [2024-11-25 13:13:00.508229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.066 [2024-11-25 13:13:00.508238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:03.066 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:03.066 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:03.324 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:03.324 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:03.324 "nvmf_tgt_1" 00:14:03.324 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:03.582 "nvmf_tgt_2" 00:14:03.582 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:03.582 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:03.582 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:03.582 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:03.582 true 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:03.840 true 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.840 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:04.098 rmmod nvme_tcp 00:14:04.098 rmmod nvme_fabrics 00:14:04.098 rmmod nvme_keyring 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3126528 ']' 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3126528 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3126528 ']' 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3126528 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3126528 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3126528' 00:14:04.098 killing process with pid 3126528 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3126528 00:14:04.098 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3126528 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.357 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.265 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:06.265 00:14:06.265 real 0m6.090s 00:14:06.265 user 0m6.929s 00:14:06.265 sys 0m2.139s 00:14:06.265 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.265 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:06.265 ************************************ 00:14:06.265 END TEST nvmf_multitarget 00:14:06.265 ************************************ 00:14:06.265 13:13:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:06.265 13:13:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.265 13:13:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.265 13:13:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.523 ************************************ 00:14:06.523 START TEST nvmf_rpc 00:14:06.523 ************************************ 00:14:06.523 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:06.524 * Looking for test storage... 
00:14:06.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.524 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.524 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.524 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.524 13:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.524 --rc genhtml_branch_coverage=1 00:14:06.524 --rc genhtml_function_coverage=1 00:14:06.524 --rc genhtml_legend=1 00:14:06.524 --rc geninfo_all_blocks=1 00:14:06.524 --rc geninfo_unexecuted_blocks=1 
00:14:06.524 00:14:06.524 ' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.524 --rc genhtml_branch_coverage=1 00:14:06.524 --rc genhtml_function_coverage=1 00:14:06.524 --rc genhtml_legend=1 00:14:06.524 --rc geninfo_all_blocks=1 00:14:06.524 --rc geninfo_unexecuted_blocks=1 00:14:06.524 00:14:06.524 ' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.524 --rc genhtml_branch_coverage=1 00:14:06.524 --rc genhtml_function_coverage=1 00:14:06.524 --rc genhtml_legend=1 00:14:06.524 --rc geninfo_all_blocks=1 00:14:06.524 --rc geninfo_unexecuted_blocks=1 00:14:06.524 00:14:06.524 ' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.524 --rc genhtml_branch_coverage=1 00:14:06.524 --rc genhtml_function_coverage=1 00:14:06.524 --rc genhtml_legend=1 00:14:06.524 --rc geninfo_all_blocks=1 00:14:06.524 --rc geninfo_unexecuted_blocks=1 00:14:06.524 00:14:06.524 ' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.524 13:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.524 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.525 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.525 13:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.056 
13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:14:09.056 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:09.056 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:09.056 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:09.057 Found net devices under 0000:09:00.0: cvl_0_0 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:09.057 Found net devices under 0000:09:00.1: cvl_0_1 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.057 13:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:09.057 
13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:09.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:14:09.057 00:14:09.057 --- 10.0.0.2 ping statistics --- 00:14:09.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.057 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:14:09.057 00:14:09.057 --- 10.0.0.1 ping statistics --- 00:14:09.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.057 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3128634 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.057 
13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3128634 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3128634 ']' 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.057 [2024-11-25 13:13:06.371871] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:14:09.057 [2024-11-25 13:13:06.371979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.057 [2024-11-25 13:13:06.441550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.057 [2024-11-25 13:13:06.496471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.057 [2024-11-25 13:13:06.496526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.057 [2024-11-25 13:13:06.496550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.057 [2024-11-25 13:13:06.496560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:09.057 [2024-11-25 13:13:06.496568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.057 [2024-11-25 13:13:06.498163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.057 [2024-11-25 13:13:06.498270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.057 [2024-11-25 13:13:06.498360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.057 [2024-11-25 13:13:06.498364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.057 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:09.058 "tick_rate": 2700000000, 00:14:09.058 "poll_groups": [ 00:14:09.058 { 00:14:09.058 "name": "nvmf_tgt_poll_group_000", 00:14:09.058 "admin_qpairs": 0, 00:14:09.058 "io_qpairs": 0, 00:14:09.058 
"current_admin_qpairs": 0, 00:14:09.058 "current_io_qpairs": 0, 00:14:09.058 "pending_bdev_io": 0, 00:14:09.058 "completed_nvme_io": 0, 00:14:09.058 "transports": [] 00:14:09.058 }, 00:14:09.058 { 00:14:09.058 "name": "nvmf_tgt_poll_group_001", 00:14:09.058 "admin_qpairs": 0, 00:14:09.058 "io_qpairs": 0, 00:14:09.058 "current_admin_qpairs": 0, 00:14:09.058 "current_io_qpairs": 0, 00:14:09.058 "pending_bdev_io": 0, 00:14:09.058 "completed_nvme_io": 0, 00:14:09.058 "transports": [] 00:14:09.058 }, 00:14:09.058 { 00:14:09.058 "name": "nvmf_tgt_poll_group_002", 00:14:09.058 "admin_qpairs": 0, 00:14:09.058 "io_qpairs": 0, 00:14:09.058 "current_admin_qpairs": 0, 00:14:09.058 "current_io_qpairs": 0, 00:14:09.058 "pending_bdev_io": 0, 00:14:09.058 "completed_nvme_io": 0, 00:14:09.058 "transports": [] 00:14:09.058 }, 00:14:09.058 { 00:14:09.058 "name": "nvmf_tgt_poll_group_003", 00:14:09.058 "admin_qpairs": 0, 00:14:09.058 "io_qpairs": 0, 00:14:09.058 "current_admin_qpairs": 0, 00:14:09.058 "current_io_qpairs": 0, 00:14:09.058 "pending_bdev_io": 0, 00:14:09.058 "completed_nvme_io": 0, 00:14:09.058 "transports": [] 00:14:09.058 } 00:14:09.058 ] 00:14:09.058 }' 00:14:09.058 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:09.058 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:09.058 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:09.058 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:09.058 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:09.058 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.316 [2024-11-25 13:13:06.721916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.316 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:09.316 "tick_rate": 2700000000, 00:14:09.316 "poll_groups": [ 00:14:09.316 { 00:14:09.316 "name": "nvmf_tgt_poll_group_000", 00:14:09.316 "admin_qpairs": 0, 00:14:09.316 "io_qpairs": 0, 00:14:09.316 "current_admin_qpairs": 0, 00:14:09.316 "current_io_qpairs": 0, 00:14:09.316 "pending_bdev_io": 0, 00:14:09.316 "completed_nvme_io": 0, 00:14:09.316 "transports": [ 00:14:09.316 { 00:14:09.316 "trtype": "TCP" 00:14:09.316 } 00:14:09.316 ] 00:14:09.316 }, 00:14:09.316 { 00:14:09.316 "name": "nvmf_tgt_poll_group_001", 00:14:09.316 "admin_qpairs": 0, 00:14:09.316 "io_qpairs": 0, 00:14:09.316 "current_admin_qpairs": 0, 00:14:09.316 "current_io_qpairs": 0, 00:14:09.316 "pending_bdev_io": 0, 00:14:09.316 "completed_nvme_io": 0, 00:14:09.316 "transports": [ 00:14:09.316 { 00:14:09.316 "trtype": "TCP" 00:14:09.316 } 00:14:09.316 ] 00:14:09.316 }, 00:14:09.316 { 00:14:09.316 "name": "nvmf_tgt_poll_group_002", 00:14:09.316 "admin_qpairs": 0, 00:14:09.316 "io_qpairs": 0, 00:14:09.317 
"current_admin_qpairs": 0, 00:14:09.317 "current_io_qpairs": 0, 00:14:09.317 "pending_bdev_io": 0, 00:14:09.317 "completed_nvme_io": 0, 00:14:09.317 "transports": [ 00:14:09.317 { 00:14:09.317 "trtype": "TCP" 00:14:09.317 } 00:14:09.317 ] 00:14:09.317 }, 00:14:09.317 { 00:14:09.317 "name": "nvmf_tgt_poll_group_003", 00:14:09.317 "admin_qpairs": 0, 00:14:09.317 "io_qpairs": 0, 00:14:09.317 "current_admin_qpairs": 0, 00:14:09.317 "current_io_qpairs": 0, 00:14:09.317 "pending_bdev_io": 0, 00:14:09.317 "completed_nvme_io": 0, 00:14:09.317 "transports": [ 00:14:09.317 { 00:14:09.317 "trtype": "TCP" 00:14:09.317 } 00:14:09.317 ] 00:14:09.317 } 00:14:09.317 ] 00:14:09.317 }' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.317 Malloc1 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.317 [2024-11-25 13:13:06.881570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.317 
13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:09.317 [2024-11-25 13:13:06.904171] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:09.317 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:09.317 could not add new controller: failed to write to nvme-fabrics device 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.317 13:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.317 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.252 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.252 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:10.252 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.252 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:10.252 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.151 13:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.151 [2024-11-25 13:13:09.698998] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:12.151 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:12.151 could not add new controller: failed to write to nvme-fabrics device 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:12.151 13:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.151 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.714 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.714 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:12.714 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.714 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:12.714 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.288 [2024-11-25 13:13:12.463499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.288 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:15.546 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:15.546 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:15.546 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.546 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:15.546 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:17.443 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:17.443 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:17.443 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.701 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.702 13:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.702 [2024-11-25 13:13:15.209347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.702 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.268 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.268 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:18.268 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.268 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:18.268 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.797 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.797 [2024-11-25 13:13:18.001098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.797 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.055 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.055 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:21.055 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:21.055 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:21.055 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.583 [2024-11-25 13:13:20.776773] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.583 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.841 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:23.841 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:23.841 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.841 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:23.841 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.370 [2024-11-25 13:13:23.569363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.370 13:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.370 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:26.628 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:26.628 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:26.628 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.628 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:26.628 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 [2024-11-25 13:13:26.407569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 [2024-11-25 13:13:26.455612] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.156 
13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 [2024-11-25 13:13:26.503785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.156 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.157 
13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 [2024-11-25 13:13:26.551935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 [2024-11-25 
13:13:26.600102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 
13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:29.157 "tick_rate": 2700000000, 00:14:29.157 "poll_groups": [ 00:14:29.157 { 00:14:29.157 "name": "nvmf_tgt_poll_group_000", 00:14:29.157 "admin_qpairs": 2, 00:14:29.157 "io_qpairs": 84, 00:14:29.157 "current_admin_qpairs": 0, 00:14:29.157 "current_io_qpairs": 0, 00:14:29.157 "pending_bdev_io": 0, 00:14:29.157 "completed_nvme_io": 184, 00:14:29.157 "transports": [ 00:14:29.157 { 00:14:29.157 "trtype": "TCP" 00:14:29.157 } 00:14:29.157 ] 00:14:29.157 }, 00:14:29.157 { 00:14:29.157 "name": "nvmf_tgt_poll_group_001", 00:14:29.157 "admin_qpairs": 2, 00:14:29.157 "io_qpairs": 84, 00:14:29.157 "current_admin_qpairs": 0, 00:14:29.157 "current_io_qpairs": 0, 00:14:29.157 "pending_bdev_io": 0, 00:14:29.157 "completed_nvme_io": 233, 00:14:29.157 "transports": [ 00:14:29.157 { 00:14:29.157 "trtype": "TCP" 00:14:29.157 } 00:14:29.157 ] 00:14:29.157 }, 00:14:29.157 { 00:14:29.157 "name": "nvmf_tgt_poll_group_002", 00:14:29.157 "admin_qpairs": 1, 00:14:29.157 "io_qpairs": 84, 00:14:29.157 "current_admin_qpairs": 0, 00:14:29.157 "current_io_qpairs": 0, 00:14:29.157 "pending_bdev_io": 0, 00:14:29.157 "completed_nvme_io": 134, 00:14:29.157 "transports": [ 00:14:29.157 { 00:14:29.157 "trtype": "TCP" 00:14:29.157 } 00:14:29.157 ] 00:14:29.157 }, 00:14:29.157 { 00:14:29.157 "name": "nvmf_tgt_poll_group_003", 00:14:29.157 "admin_qpairs": 2, 00:14:29.157 "io_qpairs": 84, 
00:14:29.157 "current_admin_qpairs": 0, 00:14:29.157 "current_io_qpairs": 0, 00:14:29.157 "pending_bdev_io": 0, 00:14:29.157 "completed_nvme_io": 135, 00:14:29.157 "transports": [ 00:14:29.157 { 00:14:29.157 "trtype": "TCP" 00:14:29.157 } 00:14:29.157 ] 00:14:29.157 } 00:14:29.157 ] 00:14:29.157 }' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:29.157 rmmod nvme_tcp 00:14:29.157 rmmod nvme_fabrics 00:14:29.157 rmmod nvme_keyring 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3128634 ']' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3128634 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3128634 ']' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3128634 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.157 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128634 00:14:29.416 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.416 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.416 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128634' 00:14:29.416 killing process with pid 3128634 00:14:29.416 13:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3128634 00:14:29.416 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3128634 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.416 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:31.954 00:14:31.954 real 0m25.178s 00:14:31.954 user 1m21.710s 00:14:31.954 sys 0m4.110s 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:31.954 ************************************ 00:14:31.954 END TEST 
nvmf_rpc 00:14:31.954 ************************************ 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.954 ************************************ 00:14:31.954 START TEST nvmf_invalid 00:14:31.954 ************************************ 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:31.954 * Looking for test storage... 00:14:31.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.954 --rc genhtml_branch_coverage=1 00:14:31.954 --rc genhtml_function_coverage=1 00:14:31.954 --rc genhtml_legend=1 00:14:31.954 --rc geninfo_all_blocks=1 00:14:31.954 --rc geninfo_unexecuted_blocks=1 00:14:31.954 00:14:31.954 ' 
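The `cmp_versions`/`lt` trace above (scripts/common.sh@333-368) splits each dotted version on `.`, `-`, and `:` and compares components numerically, padding the shorter version with zeros. A simplified, self-contained sketch of that idea (the real helper also validates components with `decimal` and supports other operators):

```shell
# Simplified version-compare in the spirit of scripts/common.sh cmp_versions:
# returns success (0) when $1 is strictly less than $2, component-wise.
# Assumes purely numeric components, unlike the validated original.
version_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This matches the decision seen in the log: `lcov` 1.15 compares as less than 2, so the older `LCOV_OPTS` branch is taken.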
00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.954 --rc genhtml_branch_coverage=1 00:14:31.954 --rc genhtml_function_coverage=1 00:14:31.954 --rc genhtml_legend=1 00:14:31.954 --rc geninfo_all_blocks=1 00:14:31.954 --rc geninfo_unexecuted_blocks=1 00:14:31.954 00:14:31.954 ' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.954 --rc genhtml_branch_coverage=1 00:14:31.954 --rc genhtml_function_coverage=1 00:14:31.954 --rc genhtml_legend=1 00:14:31.954 --rc geninfo_all_blocks=1 00:14:31.954 --rc geninfo_unexecuted_blocks=1 00:14:31.954 00:14:31.954 ' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:31.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.954 --rc genhtml_branch_coverage=1 00:14:31.954 --rc genhtml_function_coverage=1 00:14:31.954 --rc genhtml_legend=1 00:14:31.954 --rc geninfo_all_blocks=1 00:14:31.954 --rc geninfo_unexecuted_blocks=1 00:14:31.954 00:14:31.954 ' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.954 13:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.954 
13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.954 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.955 13:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.955 13:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.955 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:34.487 13:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.487 13:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:34.487 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:34.487 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.487 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:34.488 Found net devices under 0000:09:00.0: cvl_0_0 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:34.488 Found net devices under 0000:09:00.1: cvl_0_1 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:34.488 13:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.488 13:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:34.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:14:34.488 00:14:34.488 --- 10.0.0.2 ping statistics --- 00:14:34.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.488 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:34.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:14:34.488 00:14:34.488 --- 10.0.0.1 ping statistics --- 00:14:34.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.488 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:34.488 13:13:31 
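The namespace plumbing traced in the preceding entries can be condensed into a short script. This is a sketch of what nvmf_tcp_init effectively does, assuming the interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addresses from this particular run; it must be run as root on a machine with those devices:

```shell
# Put the target port in its own network namespace and wire up the test subnet.
ip netns add cvl_0_0_ns_spdk                      # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in on the default port, then verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

The two pings mirror the reachability check in the log: the setup is only considered good once traffic flows in both directions across the namespace boundary.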
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3133199 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3133199 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3133199 ']' 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
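The nvmfappstart step above launches the target inside the namespace and then blocks until its JSON-RPC socket answers. A hedged sketch of that pattern (SPDK_ROOT and the polling loop are illustrative; the traced script uses its own waitforlisten helper, and `rpc_get_methods` is used here simply as a cheap liveness probe):

```shell
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start nvmf_tgt on cores 0-3 with all tracepoint groups, inside the target netns.
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the JSON-RPC socket until the app is ready (give up after ~10 s).
for ((i = 0; i < 100; i++)); do
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1 && break
    sleep 0.1
done
```

Polling the RPC socket rather than sleeping a fixed interval is what lets the harness proceed the moment the target prints its reactor-started notices.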
00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.488 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:34.488 [2024-11-25 13:13:31.782500] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:14:34.488 [2024-11-25 13:13:31.782568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.488 [2024-11-25 13:13:31.851180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:34.488 [2024-11-25 13:13:31.906723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.488 [2024-11-25 13:13:31.906778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.488 [2024-11-25 13:13:31.906801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.488 [2024-11-25 13:13:31.906812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.488 [2024-11-25 13:13:31.906836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:34.488 [2024-11-25 13:13:31.908447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.488 [2024-11-25 13:13:31.908498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.488 [2024-11-25 13:13:31.908547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.488 [2024-11-25 13:13:31.908551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:34.488 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25676 00:14:34.746 [2024-11-25 13:13:32.332749] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:34.746 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:34.746 { 00:14:34.746 "nqn": "nqn.2016-06.io.spdk:cnode25676", 00:14:34.746 "tgt_name": "foobar", 00:14:34.746 "method": "nvmf_create_subsystem", 00:14:34.746 "req_id": 1 00:14:34.746 } 00:14:34.746 Got JSON-RPC error 
response 00:14:34.746 response: 00:14:34.746 { 00:14:34.746 "code": -32603, 00:14:34.746 "message": "Unable to find target foobar" 00:14:34.746 }' 00:14:34.746 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:34.746 { 00:14:34.746 "nqn": "nqn.2016-06.io.spdk:cnode25676", 00:14:34.746 "tgt_name": "foobar", 00:14:34.746 "method": "nvmf_create_subsystem", 00:14:34.746 "req_id": 1 00:14:34.746 } 00:14:34.746 Got JSON-RPC error response 00:14:34.746 response: 00:14:34.746 { 00:14:34.746 "code": -32603, 00:14:34.746 "message": "Unable to find target foobar" 00:14:34.746 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:34.746 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:34.746 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7541 00:14:35.005 [2024-11-25 13:13:32.605702] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7541: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:35.006 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:35.006 { 00:14:35.006 "nqn": "nqn.2016-06.io.spdk:cnode7541", 00:14:35.006 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:35.006 "method": "nvmf_create_subsystem", 00:14:35.006 "req_id": 1 00:14:35.006 } 00:14:35.006 Got JSON-RPC error response 00:14:35.006 response: 00:14:35.006 { 00:14:35.006 "code": -32602, 00:14:35.006 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:35.006 }' 00:14:35.006 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:35.006 { 00:14:35.006 "nqn": "nqn.2016-06.io.spdk:cnode7541", 00:14:35.006 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:35.006 "method": "nvmf_create_subsystem", 00:14:35.006 
"req_id": 1 00:14:35.006 } 00:14:35.006 Got JSON-RPC error response 00:14:35.006 response: 00:14:35.006 { 00:14:35.006 "code": -32602, 00:14:35.006 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:35.006 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:35.006 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:35.006 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4544 00:14:35.264 [2024-11-25 13:13:32.890661] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4544: invalid model number 'SPDK_Controller' 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:35.264 { 00:14:35.264 "nqn": "nqn.2016-06.io.spdk:cnode4544", 00:14:35.264 "model_number": "SPDK_Controller\u001f", 00:14:35.264 "method": "nvmf_create_subsystem", 00:14:35.264 "req_id": 1 00:14:35.264 } 00:14:35.264 Got JSON-RPC error response 00:14:35.264 response: 00:14:35.264 { 00:14:35.264 "code": -32602, 00:14:35.264 "message": "Invalid MN SPDK_Controller\u001f" 00:14:35.264 }' 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:35.264 { 00:14:35.264 "nqn": "nqn.2016-06.io.spdk:cnode4544", 00:14:35.264 "model_number": "SPDK_Controller\u001f", 00:14:35.264 "method": "nvmf_create_subsystem", 00:14:35.264 "req_id": 1 00:14:35.264 } 00:14:35.264 Got JSON-RPC error response 00:14:35.264 response: 00:14:35.264 { 00:14:35.264 "code": -32602, 00:14:35.264 "message": "Invalid MN SPDK_Controller\u001f" 00:14:35.264 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.264 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.522 13:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:35.522 13:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.522 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:35.523 13:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 
00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:35.523 
13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'K-8Xz{w-EkeB?*;g(kY;[' 00:14:35.523 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'K-8Xz{w-EkeB?*;g(kY;[' nqn.2016-06.io.spdk:cnode29588 00:14:35.780 [2024-11-25 13:13:33.235824] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29588: invalid serial number 'K-8Xz{w-EkeB?*;g(kY;[' 00:14:35.780 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:35.780 { 00:14:35.780 "nqn": "nqn.2016-06.io.spdk:cnode29588", 00:14:35.780 "serial_number": "K-8Xz{w-EkeB?*;g(kY;[", 00:14:35.780 "method": "nvmf_create_subsystem", 00:14:35.780 "req_id": 1 00:14:35.780 } 00:14:35.780 Got JSON-RPC error response 00:14:35.780 response: 00:14:35.780 { 00:14:35.780 "code": -32602, 00:14:35.780 "message": "Invalid SN K-8Xz{w-EkeB?*;g(kY;[" 00:14:35.780 }' 00:14:35.780 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:35.780 { 00:14:35.780 "nqn": "nqn.2016-06.io.spdk:cnode29588", 00:14:35.780 "serial_number": "K-8Xz{w-EkeB?*;g(kY;[", 00:14:35.780 "method": "nvmf_create_subsystem", 00:14:35.780 "req_id": 1 00:14:35.780 } 00:14:35.780 Got JSON-RPC error response 00:14:35.780 response: 00:14:35.780 { 00:14:35.780 "code": -32602, 00:14:35.780 "message": 
"Invalid SN K-8Xz{w-EkeB?*;g(kY;[" 00:14:35.780 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:35.780 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:35.780 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:35.780 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:35.781 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:35.781 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:35.781 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:35.781 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.781 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:35.782 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:35.782 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:35.782 13:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Q/GJk cl;GX+WJ"FP{n{8.G+}uQD0L:-- DJ}Z^wc' 00:14:35.782 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Q/GJk cl;GX+WJ"FP{n{8.G+}uQD0L:-- DJ}Z^wc' nqn.2016-06.io.spdk:cnode7736 00:14:36.043 [2024-11-25 13:13:33.669242] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7736: invalid model number 'Q/GJk cl;GX+WJ"FP{n{8.G+}uQD0L:-- DJ}Z^wc' 00:14:36.043 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:36.043 { 00:14:36.043 "nqn": "nqn.2016-06.io.spdk:cnode7736", 00:14:36.043 "model_number": "Q/GJk cl;GX+WJ\"FP{n{8.G+}uQD0L:-- DJ}Z^wc", 00:14:36.043 "method": "nvmf_create_subsystem", 00:14:36.043 "req_id": 1 00:14:36.043 } 00:14:36.043 Got JSON-RPC error response 
00:14:36.044 response: 00:14:36.044 { 00:14:36.044 "code": -32602, 00:14:36.044 "message": "Invalid MN Q/GJk cl;GX+WJ\"FP{n{8.G+}uQD0L:-- DJ}Z^wc" 00:14:36.044 }' 00:14:36.044 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:36.044 { 00:14:36.044 "nqn": "nqn.2016-06.io.spdk:cnode7736", 00:14:36.044 "model_number": "Q/GJk cl;GX+WJ\"FP{n{8.G+}uQD0L:-- DJ}Z^wc", 00:14:36.044 "method": "nvmf_create_subsystem", 00:14:36.044 "req_id": 1 00:14:36.044 } 00:14:36.044 Got JSON-RPC error response 00:14:36.044 response: 00:14:36.044 { 00:14:36.044 "code": -32602, 00:14:36.044 "message": "Invalid MN Q/GJk cl;GX+WJ\"FP{n{8.G+}uQD0L:-- DJ}Z^wc" 00:14:36.044 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:36.044 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:36.302 [2024-11-25 13:13:33.938229] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.559 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:36.817 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:36.817 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:36.817 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:36.817 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:36.817 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:37.075 [2024-11-25 13:13:34.484038] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove 
listener, rc -2 00:14:37.075 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:37.075 { 00:14:37.075 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:37.075 "listen_address": { 00:14:37.075 "trtype": "tcp", 00:14:37.075 "traddr": "", 00:14:37.075 "trsvcid": "4421" 00:14:37.075 }, 00:14:37.075 "method": "nvmf_subsystem_remove_listener", 00:14:37.075 "req_id": 1 00:14:37.075 } 00:14:37.075 Got JSON-RPC error response 00:14:37.075 response: 00:14:37.075 { 00:14:37.075 "code": -32602, 00:14:37.075 "message": "Invalid parameters" 00:14:37.075 }' 00:14:37.075 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:37.075 { 00:14:37.075 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:37.075 "listen_address": { 00:14:37.075 "trtype": "tcp", 00:14:37.075 "traddr": "", 00:14:37.075 "trsvcid": "4421" 00:14:37.075 }, 00:14:37.075 "method": "nvmf_subsystem_remove_listener", 00:14:37.075 "req_id": 1 00:14:37.075 } 00:14:37.075 Got JSON-RPC error response 00:14:37.075 response: 00:14:37.075 { 00:14:37.075 "code": -32602, 00:14:37.075 "message": "Invalid parameters" 00:14:37.075 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:37.075 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8167 -i 0 00:14:37.332 [2024-11-25 13:13:34.744845] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8167: invalid cntlid range [0-65519] 00:14:37.332 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:37.332 { 00:14:37.332 "nqn": "nqn.2016-06.io.spdk:cnode8167", 00:14:37.332 "min_cntlid": 0, 00:14:37.332 "method": "nvmf_create_subsystem", 00:14:37.332 "req_id": 1 00:14:37.332 } 00:14:37.332 Got JSON-RPC error response 00:14:37.332 response: 00:14:37.332 { 00:14:37.332 "code": -32602, 
00:14:37.332 "message": "Invalid cntlid range [0-65519]" 00:14:37.332 }' 00:14:37.332 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:37.332 { 00:14:37.332 "nqn": "nqn.2016-06.io.spdk:cnode8167", 00:14:37.332 "min_cntlid": 0, 00:14:37.332 "method": "nvmf_create_subsystem", 00:14:37.332 "req_id": 1 00:14:37.332 } 00:14:37.332 Got JSON-RPC error response 00:14:37.332 response: 00:14:37.332 { 00:14:37.332 "code": -32602, 00:14:37.332 "message": "Invalid cntlid range [0-65519]" 00:14:37.332 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:37.332 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20447 -i 65520 00:14:37.588 [2024-11-25 13:13:35.025812] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20447: invalid cntlid range [65520-65519] 00:14:37.588 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:37.588 { 00:14:37.588 "nqn": "nqn.2016-06.io.spdk:cnode20447", 00:14:37.588 "min_cntlid": 65520, 00:14:37.588 "method": "nvmf_create_subsystem", 00:14:37.588 "req_id": 1 00:14:37.588 } 00:14:37.588 Got JSON-RPC error response 00:14:37.588 response: 00:14:37.588 { 00:14:37.588 "code": -32602, 00:14:37.588 "message": "Invalid cntlid range [65520-65519]" 00:14:37.588 }' 00:14:37.588 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:37.588 { 00:14:37.588 "nqn": "nqn.2016-06.io.spdk:cnode20447", 00:14:37.588 "min_cntlid": 65520, 00:14:37.588 "method": "nvmf_create_subsystem", 00:14:37.588 "req_id": 1 00:14:37.588 } 00:14:37.588 Got JSON-RPC error response 00:14:37.588 response: 00:14:37.588 { 00:14:37.588 "code": -32602, 00:14:37.588 "message": "Invalid cntlid range [65520-65519]" 00:14:37.588 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
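The cntlid tests above probe both bounds of the controller-ID range: the target rejects `[0-65519]`, `[65520-65519]`, `[1-0]`, `[1-65520]`, and `[6-5]`, which implies valid controller IDs are 1 through 65519 and min must not exceed max. The pattern can be emulated standalone as a sketch; `check_cntlid_range` is our own illustrative function, not an SPDK helper, and the bounds are inferred from the error messages in this log.

```shell
# Sketch only: emulates the cntlid-range validation that
# rpc_nvmf_create_subsystem applies in the traces above.
# Assumption (from the logged errors): valid cntlid values are 1-65519,
# and min_cntlid must not exceed max_cntlid.
check_cntlid_range() {
  local min=$1 max=$2
  if (( min < 1 || min > 65519 || max < 1 || max > 65519 || min > max )); then
    echo "Invalid cntlid range [$min-$max]"
    return 1
  fi
  echo "cntlid range [$min-$max] accepted"
}

check_cntlid_range 0 65519 || true    # mirrors the -i 0 rejection above
check_cntlid_range 6 5 || true        # mirrors the -i 6 -I 5 rejection above
check_cntlid_range 1 65519            # the default, accepted range
```

The `|| true` guards keep the failing probes from aborting a `set -e` shell, matching how the test script captures the error output rather than exiting on it.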
00:14:37.588 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1097 -I 0 00:14:37.845 [2024-11-25 13:13:35.298677] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1097: invalid cntlid range [1-0] 00:14:37.845 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:37.845 { 00:14:37.845 "nqn": "nqn.2016-06.io.spdk:cnode1097", 00:14:37.845 "max_cntlid": 0, 00:14:37.845 "method": "nvmf_create_subsystem", 00:14:37.845 "req_id": 1 00:14:37.845 } 00:14:37.845 Got JSON-RPC error response 00:14:37.845 response: 00:14:37.845 { 00:14:37.845 "code": -32602, 00:14:37.845 "message": "Invalid cntlid range [1-0]" 00:14:37.845 }' 00:14:37.845 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:37.845 { 00:14:37.845 "nqn": "nqn.2016-06.io.spdk:cnode1097", 00:14:37.845 "max_cntlid": 0, 00:14:37.845 "method": "nvmf_create_subsystem", 00:14:37.845 "req_id": 1 00:14:37.845 } 00:14:37.845 Got JSON-RPC error response 00:14:37.845 response: 00:14:37.845 { 00:14:37.845 "code": -32602, 00:14:37.845 "message": "Invalid cntlid range [1-0]" 00:14:37.845 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:37.845 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13191 -I 65520 00:14:38.102 [2024-11-25 13:13:35.567578] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13191: invalid cntlid range [1-65520] 00:14:38.102 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:38.102 { 00:14:38.102 "nqn": "nqn.2016-06.io.spdk:cnode13191", 00:14:38.102 "max_cntlid": 65520, 00:14:38.102 "method": "nvmf_create_subsystem", 
00:14:38.102 "req_id": 1 00:14:38.102 } 00:14:38.102 Got JSON-RPC error response 00:14:38.102 response: 00:14:38.102 { 00:14:38.102 "code": -32602, 00:14:38.102 "message": "Invalid cntlid range [1-65520]" 00:14:38.102 }' 00:14:38.102 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:38.102 { 00:14:38.102 "nqn": "nqn.2016-06.io.spdk:cnode13191", 00:14:38.102 "max_cntlid": 65520, 00:14:38.102 "method": "nvmf_create_subsystem", 00:14:38.102 "req_id": 1 00:14:38.102 } 00:14:38.102 Got JSON-RPC error response 00:14:38.102 response: 00:14:38.102 { 00:14:38.102 "code": -32602, 00:14:38.102 "message": "Invalid cntlid range [1-65520]" 00:14:38.102 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:38.102 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32030 -i 6 -I 5 00:14:38.359 [2024-11-25 13:13:35.832488] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32030: invalid cntlid range [6-5] 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:38.359 { 00:14:38.359 "nqn": "nqn.2016-06.io.spdk:cnode32030", 00:14:38.359 "min_cntlid": 6, 00:14:38.359 "max_cntlid": 5, 00:14:38.359 "method": "nvmf_create_subsystem", 00:14:38.359 "req_id": 1 00:14:38.359 } 00:14:38.359 Got JSON-RPC error response 00:14:38.359 response: 00:14:38.359 { 00:14:38.359 "code": -32602, 00:14:38.359 "message": "Invalid cntlid range [6-5]" 00:14:38.359 }' 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:38.359 { 00:14:38.359 "nqn": "nqn.2016-06.io.spdk:cnode32030", 00:14:38.359 "min_cntlid": 6, 00:14:38.359 "max_cntlid": 5, 00:14:38.359 "method": "nvmf_create_subsystem", 00:14:38.359 "req_id": 1 00:14:38.359 } 00:14:38.359 Got JSON-RPC error response 00:14:38.359 
response: 00:14:38.359 { 00:14:38.359 "code": -32602, 00:14:38.359 "message": "Invalid cntlid range [6-5]" 00:14:38.359 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:38.359 { 00:14:38.359 "name": "foobar", 00:14:38.359 "method": "nvmf_delete_target", 00:14:38.359 "req_id": 1 00:14:38.359 } 00:14:38.359 Got JSON-RPC error response 00:14:38.359 response: 00:14:38.359 { 00:14:38.359 "code": -32602, 00:14:38.359 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:38.359 }' 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:38.359 { 00:14:38.359 "name": "foobar", 00:14:38.359 "method": "nvmf_delete_target", 00:14:38.359 "req_id": 1 00:14:38.359 } 00:14:38.359 Got JSON-RPC error response 00:14:38.359 response: 00:14:38.359 { 00:14:38.359 "code": -32602, 00:14:38.359 "message": "The specified target doesn't exist, cannot delete it." 
00:14:38.359 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.359 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.359 rmmod nvme_tcp 00:14:38.359 rmmod nvme_fabrics 00:14:38.359 rmmod nvme_keyring 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3133199 ']' 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3133199 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3133199 ']' 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3133199 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133199 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133199' 00:14:38.617 killing process with pid 3133199 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3133199 00:14:38.617 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3133199 00:14:38.875 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.875 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.876 13:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.876 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:40.778 00:14:40.778 real 0m9.167s 00:14:40.778 user 0m21.528s 00:14:40.778 sys 0m2.660s 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:40.778 ************************************ 00:14:40.778 END TEST nvmf_invalid 00:14:40.778 ************************************ 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.778 ************************************ 00:14:40.778 START TEST nvmf_connect_stress 00:14:40.778 ************************************ 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:40.778 * Looking for test storage... 
00:14:40.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:40.778 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:41.038 13:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.038 13:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.038 --rc genhtml_branch_coverage=1 00:14:41.038 --rc genhtml_function_coverage=1 00:14:41.038 --rc genhtml_legend=1 00:14:41.038 --rc geninfo_all_blocks=1 00:14:41.038 --rc geninfo_unexecuted_blocks=1 00:14:41.038 00:14:41.038 ' 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.038 --rc genhtml_branch_coverage=1 00:14:41.038 --rc genhtml_function_coverage=1 00:14:41.038 --rc genhtml_legend=1 00:14:41.038 --rc geninfo_all_blocks=1 00:14:41.038 --rc geninfo_unexecuted_blocks=1 00:14:41.038 00:14:41.038 ' 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.038 --rc genhtml_branch_coverage=1 00:14:41.038 --rc genhtml_function_coverage=1 00:14:41.038 --rc genhtml_legend=1 00:14:41.038 --rc geninfo_all_blocks=1 00:14:41.038 --rc geninfo_unexecuted_blocks=1 00:14:41.038 00:14:41.038 ' 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:41.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.038 --rc genhtml_branch_coverage=1 00:14:41.038 --rc genhtml_function_coverage=1 00:14:41.038 --rc genhtml_legend=1 00:14:41.038 --rc geninfo_all_blocks=1 00:14:41.038 --rc geninfo_unexecuted_blocks=1 00:14:41.038 00:14:41.038 ' 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.038 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:41.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.039 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:42.962 13:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:42.962 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.962 13:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:42.962 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.962 13:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:42.962 Found net devices under 0000:09:00.0: cvl_0_0 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:42.962 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:42.962 Found net devices under 0000:09:00.1: cvl_0_1 
00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:42.963 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:43.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:43.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:14:43.223 00:14:43.223 --- 10.0.0.2 ping statistics --- 00:14:43.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.223 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:14:43.223 00:14:43.223 --- 10.0.0.1 ping statistics --- 00:14:43.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.223 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:43.223 13:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3135844 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3135844 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3135844 ']' 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.223 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.223 [2024-11-25 13:13:40.794967] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:14:43.223 [2024-11-25 13:13:40.795049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.223 [2024-11-25 13:13:40.865895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:43.483 [2024-11-25 13:13:40.922128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.483 [2024-11-25 13:13:40.922183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.483 [2024-11-25 13:13:40.922196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.483 [2024-11-25 13:13:40.922206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.483 [2024-11-25 13:13:40.922215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.483 [2024-11-25 13:13:40.923778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.483 [2024-11-25 13:13:40.923832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.483 [2024-11-25 13:13:40.923835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 [2024-11-25 13:13:41.072653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 [2024-11-25 13:13:41.090006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.483 NULL1 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3135915 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:43.483 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:43.484 13:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.484 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.742 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.000 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:44.000 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.000 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.000 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.257 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.257 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:44.257 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.257 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.258 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.514 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.514 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:44.514 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.514 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.514 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.078 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.078 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:45.078 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.078 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.078 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.367 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.367 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:45.367 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.367 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.367 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.641 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.641 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:45.641 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.641 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.641 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.897 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.897 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:45.897 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.897 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.897 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.153 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.153 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:46.153 13:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.153 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.153 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.409 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.409 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:46.409 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.409 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.409 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.972 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.972 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:46.972 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.972 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.972 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.229 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.229 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:47.229 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.229 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.229 
13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.487 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.487 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:47.487 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.487 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.487 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.744 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.744 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:47.744 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.744 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.744 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.002 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.002 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:48.002 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.002 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.002 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.567 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.567 
13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:48.567 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.567 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.567 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.825 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.825 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:48.825 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.825 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.825 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.083 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.083 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:49.083 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.083 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.083 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.340 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.340 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:49.340 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:49.340 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.340 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.597 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.597 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:49.598 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.598 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.598 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.163 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.163 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:50.163 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.163 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.163 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.421 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.421 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:50.421 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.421 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.421 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:14:50.678 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.678 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:50.678 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.678 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.678 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.936 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.936 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:50.936 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.936 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.936 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.501 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.501 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:51.501 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.501 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.501 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3135915 00:14:51.766 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.767 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.767 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.027 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.027 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:52.027 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.027 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.027 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.284 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.284 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:52.284 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.284 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.284 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.542 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.542 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:52.542 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.542 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:52.542 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.106 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.106 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:53.106 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.106 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.106 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.363 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.363 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:53.363 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.363 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.363 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.620 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.620 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:53.620 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.620 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.620 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.620 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3135915 00:14:53.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3135915) - No such process 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3135915 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:53.877 rmmod nvme_tcp 00:14:53.877 rmmod nvme_fabrics 00:14:53.877 rmmod nvme_keyring 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3135844 ']' 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3135844 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3135844 ']' 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3135844 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135844 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135844' 00:14:53.877 killing process with pid 3135844 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3135844 00:14:53.877 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3135844 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.135 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:56.664 00:14:56.664 real 0m15.435s 00:14:56.664 user 0m38.748s 00:14:56.664 sys 0m5.783s 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.664 ************************************ 00:14:56.664 END TEST nvmf_connect_stress 00:14:56.664 ************************************ 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.664 ************************************ 00:14:56.664 START TEST nvmf_fused_ordering 00:14:56.664 ************************************ 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:56.664 * Looking for test storage... 00:14:56.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.664 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.664 13:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.664 --rc genhtml_branch_coverage=1 00:14:56.664 --rc genhtml_function_coverage=1 00:14:56.664 --rc genhtml_legend=1 00:14:56.664 --rc geninfo_all_blocks=1 00:14:56.664 --rc geninfo_unexecuted_blocks=1 00:14:56.664 00:14:56.664 ' 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.664 --rc genhtml_branch_coverage=1 00:14:56.664 --rc genhtml_function_coverage=1 00:14:56.664 --rc genhtml_legend=1 00:14:56.664 --rc geninfo_all_blocks=1 00:14:56.664 --rc geninfo_unexecuted_blocks=1 00:14:56.664 00:14:56.664 ' 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.664 --rc genhtml_branch_coverage=1 00:14:56.664 --rc genhtml_function_coverage=1 00:14:56.664 --rc genhtml_legend=1 00:14:56.664 --rc geninfo_all_blocks=1 00:14:56.664 --rc geninfo_unexecuted_blocks=1 00:14:56.664 00:14:56.664 ' 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.664 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:56.664 --rc genhtml_branch_coverage=1 00:14:56.664 --rc genhtml_function_coverage=1 00:14:56.664 --rc genhtml_legend=1 00:14:56.664 --rc geninfo_all_blocks=1 00:14:56.664 --rc geninfo_unexecuted_blocks=1 00:14:56.664 00:14:56.664 ' 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.664 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.665 13:13:54 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:56.665 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.568 13:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.568 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:58.569 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.569 13:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:58.569 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.569 13:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:58.569 Found net devices under 0000:09:00.0: cvl_0_0 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:58.569 Found net devices under 0000:09:00.1: cvl_0_1 
00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.569 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:58.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:14:58.828 00:14:58.828 --- 10.0.0.2 ping statistics --- 00:14:58.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.828 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:14:58.828 00:14:58.828 --- 10.0.0.1 ping statistics --- 00:14:58.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.828 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:58.828 13:13:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.828 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3139073 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3139073 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3139073 ']' 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.829 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.829 [2024-11-25 13:13:56.409756] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:14:58.829 [2024-11-25 13:13:56.409837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.829 [2024-11-25 13:13:56.478365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.088 [2024-11-25 13:13:56.536643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.088 [2024-11-25 13:13:56.536691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.088 [2024-11-25 13:13:56.536720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.088 [2024-11-25 13:13:56.536732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.088 [2024-11-25 13:13:56.536748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:59.088 [2024-11-25 13:13:56.537344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.088 [2024-11-25 13:13:56.711918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.088 [2024-11-25 13:13:56.728108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.088 NULL1 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.088 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.346 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.346 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:59.346 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.346 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:59.346 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.346 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:59.346 [2024-11-25 13:13:56.775369] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:14:59.346 [2024-11-25 13:13:56.775410] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139222 ] 00:14:59.603 Attached to nqn.2016-06.io.spdk:cnode1 00:14:59.603 Namespace ID: 1 size: 1GB 00:14:59.603 fused_ordering(0) 00:14:59.603 fused_ordering(1) 00:14:59.603 fused_ordering(2) 00:14:59.603 fused_ordering(3) 00:14:59.603 fused_ordering(4) 00:14:59.603 fused_ordering(5) 00:14:59.603 fused_ordering(6) 00:14:59.603 fused_ordering(7) 00:14:59.603 fused_ordering(8) 00:14:59.603 fused_ordering(9) 00:14:59.603 fused_ordering(10) 00:14:59.603 fused_ordering(11) 00:14:59.603 fused_ordering(12) 00:14:59.603 fused_ordering(13) 00:14:59.603 fused_ordering(14) 00:14:59.603 fused_ordering(15) 00:14:59.603 fused_ordering(16) 00:14:59.603 fused_ordering(17) 00:14:59.603 fused_ordering(18) 00:14:59.603 fused_ordering(19) 00:14:59.603 fused_ordering(20) 00:14:59.603 fused_ordering(21) 00:14:59.603 fused_ordering(22) 00:14:59.603 fused_ordering(23) 00:14:59.603 fused_ordering(24) 00:14:59.603 fused_ordering(25) 00:14:59.603 fused_ordering(26) 00:14:59.603 fused_ordering(27) 00:14:59.603 
fused_ordering(28) 00:14:59.603 fused_ordering(29) 00:14:59.603 fused_ordering(30) 00:14:59.603 fused_ordering(31) 00:14:59.603 fused_ordering(32) 00:14:59.603 fused_ordering(33) 00:14:59.603 fused_ordering(34) 00:14:59.603 fused_ordering(35) 00:14:59.603 fused_ordering(36) 00:14:59.603 fused_ordering(37) 00:14:59.603 fused_ordering(38) 00:14:59.603 fused_ordering(39) 00:14:59.603 fused_ordering(40) 00:14:59.603 fused_ordering(41) 00:14:59.603 fused_ordering(42) 00:14:59.603 fused_ordering(43) 00:14:59.603 fused_ordering(44) 00:14:59.603 fused_ordering(45) 00:14:59.603 fused_ordering(46) 00:14:59.603 fused_ordering(47) 00:14:59.603 fused_ordering(48) 00:14:59.604 fused_ordering(49) 00:14:59.604 fused_ordering(50) 00:14:59.604 fused_ordering(51) 00:14:59.604 fused_ordering(52) 00:14:59.604 fused_ordering(53) 00:14:59.604 fused_ordering(54) 00:14:59.604 fused_ordering(55) 00:14:59.604 fused_ordering(56) 00:14:59.604 fused_ordering(57) 00:14:59.604 fused_ordering(58) 00:14:59.604 fused_ordering(59) 00:14:59.604 fused_ordering(60) 00:14:59.604 fused_ordering(61) 00:14:59.604 fused_ordering(62) 00:14:59.604 fused_ordering(63) 00:14:59.604 fused_ordering(64) 00:14:59.604 fused_ordering(65) 00:14:59.604 fused_ordering(66) 00:14:59.604 fused_ordering(67) 00:14:59.604 fused_ordering(68) 00:14:59.604 fused_ordering(69) 00:14:59.604 fused_ordering(70) 00:14:59.604 fused_ordering(71) 00:14:59.604 fused_ordering(72) 00:14:59.604 fused_ordering(73) 00:14:59.604 fused_ordering(74) 00:14:59.604 fused_ordering(75) 00:14:59.604 fused_ordering(76) 00:14:59.604 fused_ordering(77) 00:14:59.604 fused_ordering(78) 00:14:59.604 fused_ordering(79) 00:14:59.604 fused_ordering(80) 00:14:59.604 fused_ordering(81) 00:14:59.604 fused_ordering(82) 00:14:59.604 fused_ordering(83) 00:14:59.604 fused_ordering(84) 00:14:59.604 fused_ordering(85) 00:14:59.604 fused_ordering(86) 00:14:59.604 fused_ordering(87) 00:14:59.604 fused_ordering(88) 00:14:59.604 fused_ordering(89) 00:14:59.604 
fused_ordering(90) 00:14:59.604 fused_ordering(91) 00:14:59.604 fused_ordering(92) 00:14:59.604 fused_ordering(93) 00:14:59.604 fused_ordering(94) 00:14:59.604 fused_ordering(95) 00:14:59.604 fused_ordering(96) 00:14:59.604 fused_ordering(97) 00:14:59.604 fused_ordering(98) 00:14:59.604 fused_ordering(99) 00:14:59.604 fused_ordering(100) 00:14:59.604 fused_ordering(101) 00:14:59.604 fused_ordering(102) 00:14:59.604 fused_ordering(103) 00:14:59.604 fused_ordering(104) 00:14:59.604 fused_ordering(105) 00:14:59.604 fused_ordering(106) 00:14:59.604 fused_ordering(107) 00:14:59.604 fused_ordering(108) 00:14:59.604 fused_ordering(109) 00:14:59.604 fused_ordering(110) 00:14:59.604 fused_ordering(111) 00:14:59.604 fused_ordering(112) 00:14:59.604 fused_ordering(113) 00:14:59.604 fused_ordering(114) 00:14:59.604 fused_ordering(115) 00:14:59.604 fused_ordering(116) 00:14:59.604 fused_ordering(117) 00:14:59.604 fused_ordering(118) 00:14:59.604 fused_ordering(119) 00:14:59.604 fused_ordering(120) 00:14:59.604 fused_ordering(121) 00:14:59.604 fused_ordering(122) 00:14:59.604 fused_ordering(123) 00:14:59.604 fused_ordering(124) 00:14:59.604 fused_ordering(125) 00:14:59.604 fused_ordering(126) 00:14:59.604 fused_ordering(127) 00:14:59.604 fused_ordering(128) 00:14:59.604 fused_ordering(129) 00:14:59.604 fused_ordering(130) 00:14:59.604 fused_ordering(131) 00:14:59.604 fused_ordering(132) 00:14:59.604 fused_ordering(133) 00:14:59.604 fused_ordering(134) 00:14:59.604 fused_ordering(135) 00:14:59.604 fused_ordering(136) 00:14:59.604 fused_ordering(137) 00:14:59.604 fused_ordering(138) 00:14:59.604 fused_ordering(139) 00:14:59.604 fused_ordering(140) 00:14:59.604 fused_ordering(141) 00:14:59.604 fused_ordering(142) 00:14:59.604 fused_ordering(143) 00:14:59.604 fused_ordering(144) 00:14:59.604 fused_ordering(145) 00:14:59.604 fused_ordering(146) 00:14:59.604 fused_ordering(147) 00:14:59.604 fused_ordering(148) 00:14:59.604 fused_ordering(149) 00:14:59.604 fused_ordering(150) 
00:14:59.604 fused_ordering(151) 00:14:59.604 fused_ordering(152) 00:14:59.604 fused_ordering(153) 00:14:59.604 fused_ordering(154) 00:14:59.604 fused_ordering(155) 00:14:59.604 fused_ordering(156) 00:14:59.604 fused_ordering(157) 00:14:59.604 fused_ordering(158) 00:14:59.604 fused_ordering(159) 00:14:59.604 fused_ordering(160) 00:14:59.604 fused_ordering(161) 00:14:59.604 fused_ordering(162) 00:14:59.604 fused_ordering(163) 00:14:59.604 fused_ordering(164) 00:14:59.604 fused_ordering(165) 00:14:59.604 fused_ordering(166) 00:14:59.604 fused_ordering(167) 00:14:59.604 fused_ordering(168) 00:14:59.604 fused_ordering(169) 00:14:59.604 fused_ordering(170) 00:14:59.604 fused_ordering(171) 00:14:59.604 fused_ordering(172) 00:14:59.604 fused_ordering(173) 00:14:59.604 fused_ordering(174) 00:14:59.604 fused_ordering(175) 00:14:59.604 fused_ordering(176) 00:14:59.604 fused_ordering(177) 00:14:59.604 fused_ordering(178) 00:14:59.604 fused_ordering(179) 00:14:59.604 fused_ordering(180) 00:14:59.604 fused_ordering(181) 00:14:59.604 fused_ordering(182) 00:14:59.604 fused_ordering(183) 00:14:59.604 fused_ordering(184) 00:14:59.604 fused_ordering(185) 00:14:59.604 fused_ordering(186) 00:14:59.604 fused_ordering(187) 00:14:59.604 fused_ordering(188) 00:14:59.604 fused_ordering(189) 00:14:59.604 fused_ordering(190) 00:14:59.604 fused_ordering(191) 00:14:59.604 fused_ordering(192) 00:14:59.604 fused_ordering(193) 00:14:59.604 fused_ordering(194) 00:14:59.604 fused_ordering(195) 00:14:59.604 fused_ordering(196) 00:14:59.604 fused_ordering(197) 00:14:59.604 fused_ordering(198) 00:14:59.604 fused_ordering(199) 00:14:59.604 fused_ordering(200) 00:14:59.604 fused_ordering(201) 00:14:59.604 fused_ordering(202) 00:14:59.604 fused_ordering(203) 00:14:59.604 fused_ordering(204) 00:14:59.604 fused_ordering(205) 00:15:00.169 fused_ordering(206) 00:15:00.169 fused_ordering(207) 00:15:00.169 fused_ordering(208) 00:15:00.169 fused_ordering(209) 00:15:00.169 fused_ordering(210) 00:15:00.169 
fused_ordering(211) 00:15:00.169 fused_ordering(212) 00:15:00.169 fused_ordering(213) 00:15:00.169 fused_ordering(214) 00:15:00.169 fused_ordering(215) 00:15:00.169 fused_ordering(216) 00:15:00.169 fused_ordering(217) 00:15:00.169 fused_ordering(218) 00:15:00.169 fused_ordering(219) 00:15:00.169 fused_ordering(220) 00:15:00.169 fused_ordering(221) 00:15:00.169 fused_ordering(222) 00:15:00.169 fused_ordering(223) 00:15:00.169 fused_ordering(224) 00:15:00.169 fused_ordering(225) 00:15:00.169 fused_ordering(226) 00:15:00.169 fused_ordering(227) 00:15:00.169 fused_ordering(228) 00:15:00.169 fused_ordering(229) 00:15:00.169 fused_ordering(230) 00:15:00.169 fused_ordering(231) 00:15:00.169 fused_ordering(232) 00:15:00.169 fused_ordering(233) 00:15:00.169 fused_ordering(234) 00:15:00.169 fused_ordering(235) 00:15:00.169 fused_ordering(236) 00:15:00.169 fused_ordering(237) 00:15:00.169 fused_ordering(238) 00:15:00.169 fused_ordering(239) 00:15:00.169 fused_ordering(240) 00:15:00.169 fused_ordering(241) 00:15:00.169 fused_ordering(242) 00:15:00.169 fused_ordering(243) 00:15:00.169 fused_ordering(244) 00:15:00.169 fused_ordering(245) 00:15:00.169 fused_ordering(246) 00:15:00.169 fused_ordering(247) 00:15:00.169 fused_ordering(248) 00:15:00.169 fused_ordering(249) 00:15:00.169 fused_ordering(250) 00:15:00.169 fused_ordering(251) 00:15:00.169 fused_ordering(252) 00:15:00.169 fused_ordering(253) 00:15:00.169 fused_ordering(254) 00:15:00.169 fused_ordering(255) 00:15:00.169 fused_ordering(256) 00:15:00.169 fused_ordering(257) 00:15:00.169 fused_ordering(258) 00:15:00.169 fused_ordering(259) 00:15:00.169 fused_ordering(260) 00:15:00.169 fused_ordering(261) 00:15:00.169 fused_ordering(262) 00:15:00.169 fused_ordering(263) 00:15:00.169 fused_ordering(264) 00:15:00.169 fused_ordering(265) 00:15:00.169 fused_ordering(266) 00:15:00.169 fused_ordering(267) 00:15:00.169 fused_ordering(268) 00:15:00.169 fused_ordering(269) 00:15:00.169 fused_ordering(270) 00:15:00.169 fused_ordering(271) 
00:15:00.169 fused_ordering(272) 00:15:00.169 fused_ordering(273) 00:15:00.169 fused_ordering(274) 00:15:00.169 fused_ordering(275) 00:15:00.169 fused_ordering(276) 00:15:00.169 fused_ordering(277) 00:15:00.169 fused_ordering(278) 00:15:00.169 fused_ordering(279) 00:15:00.169 fused_ordering(280) 00:15:00.169 fused_ordering(281) 00:15:00.169 fused_ordering(282) 00:15:00.169 fused_ordering(283) 00:15:00.169 fused_ordering(284) 00:15:00.169 fused_ordering(285) 00:15:00.169 fused_ordering(286) 00:15:00.169 fused_ordering(287) 00:15:00.169 fused_ordering(288) 00:15:00.169 fused_ordering(289) 00:15:00.169 fused_ordering(290) 00:15:00.169 fused_ordering(291) 00:15:00.169 fused_ordering(292) 00:15:00.169 fused_ordering(293) 00:15:00.169 fused_ordering(294) 00:15:00.169 fused_ordering(295) 00:15:00.169 fused_ordering(296) 00:15:00.169 fused_ordering(297) 00:15:00.169 fused_ordering(298) 00:15:00.169 fused_ordering(299) 00:15:00.169 fused_ordering(300) 00:15:00.169 fused_ordering(301) 00:15:00.169 fused_ordering(302) 00:15:00.169 fused_ordering(303) 00:15:00.169 fused_ordering(304) 00:15:00.169 fused_ordering(305) 00:15:00.169 fused_ordering(306) 00:15:00.169 fused_ordering(307) 00:15:00.169 fused_ordering(308) 00:15:00.169 fused_ordering(309) 00:15:00.169 fused_ordering(310) 00:15:00.169 fused_ordering(311) 00:15:00.169 fused_ordering(312) 00:15:00.169 fused_ordering(313) 00:15:00.169 fused_ordering(314) 00:15:00.169 fused_ordering(315) 00:15:00.169 fused_ordering(316) 00:15:00.169 fused_ordering(317) 00:15:00.169 fused_ordering(318) 00:15:00.169 fused_ordering(319) 00:15:00.169 fused_ordering(320) 00:15:00.169 fused_ordering(321) 00:15:00.169 fused_ordering(322) 00:15:00.169 fused_ordering(323) 00:15:00.169 fused_ordering(324) 00:15:00.169 fused_ordering(325) 00:15:00.169 fused_ordering(326) 00:15:00.169 fused_ordering(327) 00:15:00.169 fused_ordering(328) 00:15:00.169 fused_ordering(329) 00:15:00.169 fused_ordering(330) 00:15:00.169 fused_ordering(331) 00:15:00.169 
fused_ordering(332) 00:15:00.169 fused_ordering(333) 00:15:00.169 fused_ordering(334) 00:15:00.169 fused_ordering(335) 00:15:00.169 fused_ordering(336) 00:15:00.169 fused_ordering(337) 00:15:00.169 fused_ordering(338) 00:15:00.169 fused_ordering(339) 00:15:00.169 fused_ordering(340) 00:15:00.169 fused_ordering(341) 00:15:00.169 fused_ordering(342) 00:15:00.169 fused_ordering(343) 00:15:00.169 fused_ordering(344) 00:15:00.169 fused_ordering(345) 00:15:00.169 fused_ordering(346) 00:15:00.169 fused_ordering(347) 00:15:00.169 fused_ordering(348) 00:15:00.169 fused_ordering(349) 00:15:00.169 fused_ordering(350) 00:15:00.169 fused_ordering(351) 00:15:00.169 fused_ordering(352) 00:15:00.169 fused_ordering(353) 00:15:00.169 fused_ordering(354) 00:15:00.169 fused_ordering(355) 00:15:00.169 fused_ordering(356) 00:15:00.169 fused_ordering(357) 00:15:00.169 fused_ordering(358) 00:15:00.169 fused_ordering(359) 00:15:00.169 fused_ordering(360) 00:15:00.169 fused_ordering(361) 00:15:00.169 fused_ordering(362) 00:15:00.169 fused_ordering(363) 00:15:00.169 fused_ordering(364) 00:15:00.169 fused_ordering(365) 00:15:00.169 fused_ordering(366) 00:15:00.169 fused_ordering(367) 00:15:00.169 fused_ordering(368) 00:15:00.170 fused_ordering(369) 00:15:00.170 fused_ordering(370) 00:15:00.170 fused_ordering(371) 00:15:00.170 fused_ordering(372) 00:15:00.170 fused_ordering(373) 00:15:00.170 fused_ordering(374) 00:15:00.170 fused_ordering(375) 00:15:00.170 fused_ordering(376) 00:15:00.170 fused_ordering(377) 00:15:00.170 fused_ordering(378) 00:15:00.170 fused_ordering(379) 00:15:00.170 fused_ordering(380) 00:15:00.170 fused_ordering(381) 00:15:00.170 fused_ordering(382) 00:15:00.170 fused_ordering(383) 00:15:00.170 fused_ordering(384) 00:15:00.170 fused_ordering(385) 00:15:00.170 fused_ordering(386) 00:15:00.170 fused_ordering(387) 00:15:00.170 fused_ordering(388) 00:15:00.170 fused_ordering(389) 00:15:00.170 fused_ordering(390) 00:15:00.170 fused_ordering(391) 00:15:00.170 fused_ordering(392) 
00:15:00.170 fused_ordering(393) 00:15:00.170 fused_ordering(394) 00:15:00.170 fused_ordering(395) 00:15:00.170 fused_ordering(396) 00:15:00.170 fused_ordering(397) 00:15:00.170 fused_ordering(398) 00:15:00.170 fused_ordering(399) 00:15:00.170 fused_ordering(400) 00:15:00.170 fused_ordering(401) 00:15:00.170 fused_ordering(402) 00:15:00.170 fused_ordering(403) 00:15:00.170 fused_ordering(404) 00:15:00.170 fused_ordering(405) 00:15:00.170 fused_ordering(406) 00:15:00.170 fused_ordering(407) 00:15:00.170 fused_ordering(408) 00:15:00.170 fused_ordering(409) 00:15:00.170 fused_ordering(410) 00:15:00.427 fused_ordering(411) 00:15:00.427 fused_ordering(412) 00:15:00.427 fused_ordering(413) 00:15:00.427 fused_ordering(414) 00:15:00.427 fused_ordering(415) 00:15:00.427 fused_ordering(416) 00:15:00.427 fused_ordering(417) 00:15:00.427 fused_ordering(418) 00:15:00.427 fused_ordering(419) 00:15:00.427 fused_ordering(420) 00:15:00.427 fused_ordering(421) 00:15:00.427 fused_ordering(422) 00:15:00.427 fused_ordering(423) 00:15:00.427 fused_ordering(424) 00:15:00.427 fused_ordering(425) 00:15:00.427 fused_ordering(426) 00:15:00.427 fused_ordering(427) 00:15:00.427 fused_ordering(428) 00:15:00.427 fused_ordering(429) 00:15:00.427 fused_ordering(430) 00:15:00.427 fused_ordering(431) 00:15:00.427 fused_ordering(432) 00:15:00.427 fused_ordering(433) 00:15:00.427 fused_ordering(434) 00:15:00.427 fused_ordering(435) 00:15:00.427 fused_ordering(436) 00:15:00.427 fused_ordering(437) 00:15:00.427 fused_ordering(438) 00:15:00.427 fused_ordering(439) 00:15:00.427 fused_ordering(440) 00:15:00.427 fused_ordering(441) 00:15:00.427 fused_ordering(442) 00:15:00.427 fused_ordering(443) 00:15:00.427 fused_ordering(444) 00:15:00.427 fused_ordering(445) 00:15:00.427 fused_ordering(446) 00:15:00.427 fused_ordering(447) 00:15:00.427 fused_ordering(448) 00:15:00.427 fused_ordering(449) 00:15:00.427 fused_ordering(450) 00:15:00.427 fused_ordering(451) 00:15:00.427 fused_ordering(452) 00:15:00.427 
fused_ordering(453) 00:15:00.427 fused_ordering(454) 00:15:00.427 fused_ordering(455) 00:15:00.427 fused_ordering(456) 00:15:00.427 fused_ordering(457) 00:15:00.427 fused_ordering(458) 00:15:00.427 fused_ordering(459) 00:15:00.427 fused_ordering(460) 00:15:00.428 fused_ordering(461) 00:15:00.428 fused_ordering(462) 00:15:00.428 fused_ordering(463) 00:15:00.428 fused_ordering(464) 00:15:00.428 fused_ordering(465) 00:15:00.428 fused_ordering(466) 00:15:00.428 fused_ordering(467) 00:15:00.428 fused_ordering(468) 00:15:00.428 fused_ordering(469) 00:15:00.428 fused_ordering(470) 00:15:00.428 fused_ordering(471) 00:15:00.428 fused_ordering(472) 00:15:00.428 fused_ordering(473) 00:15:00.428 fused_ordering(474) 00:15:00.428 fused_ordering(475) 00:15:00.428 fused_ordering(476) 00:15:00.428 fused_ordering(477) 00:15:00.428 fused_ordering(478) 00:15:00.428 fused_ordering(479) 00:15:00.428 fused_ordering(480) 00:15:00.428 fused_ordering(481) 00:15:00.428 fused_ordering(482) 00:15:00.428 fused_ordering(483) 00:15:00.428 fused_ordering(484) 00:15:00.428 fused_ordering(485) 00:15:00.428 fused_ordering(486) 00:15:00.428 fused_ordering(487) 00:15:00.428 fused_ordering(488) 00:15:00.428 fused_ordering(489) 00:15:00.428 fused_ordering(490) 00:15:00.428 fused_ordering(491) 00:15:00.428 fused_ordering(492) 00:15:00.428 fused_ordering(493) 00:15:00.428 fused_ordering(494) 00:15:00.428 fused_ordering(495) 00:15:00.428 fused_ordering(496) 00:15:00.428 fused_ordering(497) 00:15:00.428 fused_ordering(498) 00:15:00.428 fused_ordering(499) 00:15:00.428 fused_ordering(500) 00:15:00.428 fused_ordering(501) 00:15:00.428 fused_ordering(502) 00:15:00.428 fused_ordering(503) 00:15:00.428 fused_ordering(504) 00:15:00.428 fused_ordering(505) 00:15:00.428 fused_ordering(506) 00:15:00.428 fused_ordering(507) 00:15:00.428 fused_ordering(508) 00:15:00.428 fused_ordering(509) 00:15:00.428 fused_ordering(510) 00:15:00.428 fused_ordering(511) 00:15:00.428 fused_ordering(512) 00:15:00.428 fused_ordering(513) 
00:15:00.428 fused_ordering(514) 00:15:00.428 fused_ordering(515) 00:15:00.428 fused_ordering(516) 00:15:00.428 fused_ordering(517) 00:15:00.428 fused_ordering(518) 00:15:00.428 fused_ordering(519) 00:15:00.428 fused_ordering(520) 00:15:00.428 fused_ordering(521) 00:15:00.428 fused_ordering(522) 00:15:00.428 fused_ordering(523) 00:15:00.428 fused_ordering(524) 00:15:00.428 fused_ordering(525) 00:15:00.428 fused_ordering(526) 00:15:00.428 fused_ordering(527) 00:15:00.428 fused_ordering(528) 00:15:00.428 fused_ordering(529) 00:15:00.428 fused_ordering(530) 00:15:00.428 fused_ordering(531) 00:15:00.428 fused_ordering(532) 00:15:00.428 fused_ordering(533) 00:15:00.428 fused_ordering(534) 00:15:00.428 fused_ordering(535) 00:15:00.428 fused_ordering(536) 00:15:00.428 fused_ordering(537) 00:15:00.428 fused_ordering(538) 00:15:00.428 fused_ordering(539) 00:15:00.428 fused_ordering(540) 00:15:00.428 fused_ordering(541) 00:15:00.428 fused_ordering(542) 00:15:00.428 fused_ordering(543) 00:15:00.428 fused_ordering(544) 00:15:00.428 fused_ordering(545) 00:15:00.428 fused_ordering(546) 00:15:00.428 fused_ordering(547) 00:15:00.428 fused_ordering(548) 00:15:00.428 fused_ordering(549) 00:15:00.428 fused_ordering(550) 00:15:00.428 fused_ordering(551) 00:15:00.428 fused_ordering(552) 00:15:00.428 fused_ordering(553) 00:15:00.428 fused_ordering(554) 00:15:00.428 fused_ordering(555) 00:15:00.428 fused_ordering(556) 00:15:00.428 fused_ordering(557) 00:15:00.428 fused_ordering(558) 00:15:00.428 fused_ordering(559) 00:15:00.428 fused_ordering(560) 00:15:00.428 fused_ordering(561) 00:15:00.428 fused_ordering(562) 00:15:00.428 fused_ordering(563) 00:15:00.428 fused_ordering(564) 00:15:00.428 fused_ordering(565) 00:15:00.428 fused_ordering(566) 00:15:00.428 fused_ordering(567) 00:15:00.428 fused_ordering(568) 00:15:00.428 fused_ordering(569) 00:15:00.428 fused_ordering(570) 00:15:00.428 fused_ordering(571) 00:15:00.428 fused_ordering(572) 00:15:00.428 fused_ordering(573) 00:15:00.428 
fused_ordering(574) 00:15:00.428 fused_ordering(575) 00:15:00.428 fused_ordering(576) 00:15:00.428 fused_ordering(577) 00:15:00.428 fused_ordering(578) 00:15:00.428 fused_ordering(579) 00:15:00.428 fused_ordering(580) 00:15:00.428 fused_ordering(581) 00:15:00.428 fused_ordering(582) 00:15:00.428 fused_ordering(583) 00:15:00.428 fused_ordering(584) 00:15:00.428 fused_ordering(585) 00:15:00.428 fused_ordering(586) 00:15:00.428 fused_ordering(587) 00:15:00.428 fused_ordering(588) 00:15:00.428 fused_ordering(589) 00:15:00.428 fused_ordering(590) 00:15:00.428 fused_ordering(591) 00:15:00.428 fused_ordering(592) 00:15:00.428 fused_ordering(593) 00:15:00.428 fused_ordering(594) 00:15:00.428 fused_ordering(595) 00:15:00.428 fused_ordering(596) 00:15:00.428 fused_ordering(597) 00:15:00.428 fused_ordering(598) 00:15:00.428 fused_ordering(599) 00:15:00.428 fused_ordering(600) 00:15:00.428 fused_ordering(601) 00:15:00.428 fused_ordering(602) 00:15:00.428 fused_ordering(603) 00:15:00.428 fused_ordering(604) 00:15:00.428 fused_ordering(605) 00:15:00.428 fused_ordering(606) 00:15:00.428 fused_ordering(607) 00:15:00.428 fused_ordering(608) 00:15:00.428 fused_ordering(609) 00:15:00.428 fused_ordering(610) 00:15:00.428 fused_ordering(611) 00:15:00.428 fused_ordering(612) 00:15:00.428 fused_ordering(613) 00:15:00.428 fused_ordering(614) 00:15:00.428 fused_ordering(615) 00:15:00.993 fused_ordering(616) 00:15:00.993 fused_ordering(617) 00:15:00.993 fused_ordering(618) 00:15:00.993 fused_ordering(619) 00:15:00.993 fused_ordering(620) 00:15:00.993 fused_ordering(621) 00:15:00.993 fused_ordering(622) 00:15:00.993 fused_ordering(623) 00:15:00.993 fused_ordering(624) 00:15:00.993 fused_ordering(625) 00:15:00.993 fused_ordering(626) 00:15:00.993 fused_ordering(627) 00:15:00.993 fused_ordering(628) 00:15:00.993 fused_ordering(629) 00:15:00.993 fused_ordering(630) 00:15:00.993 fused_ordering(631) 00:15:00.993 fused_ordering(632) 00:15:00.993 fused_ordering(633) 00:15:00.993 fused_ordering(634) 
00:15:00.993 fused_ordering(635) 00:15:00.993 fused_ordering(636) 00:15:00.993 fused_ordering(637) 00:15:00.993 fused_ordering(638) 00:15:00.993 fused_ordering(639) 00:15:00.993 fused_ordering(640) 00:15:00.993 fused_ordering(641) 00:15:00.993 fused_ordering(642) 00:15:00.993 fused_ordering(643) 00:15:00.993 fused_ordering(644) 00:15:00.993 fused_ordering(645) 00:15:00.993 fused_ordering(646) 00:15:00.993 fused_ordering(647) 00:15:00.993 fused_ordering(648) 00:15:00.993 fused_ordering(649) 00:15:00.993 fused_ordering(650) 00:15:00.993 fused_ordering(651) 00:15:00.993 fused_ordering(652) 00:15:00.993 fused_ordering(653) 00:15:00.993 fused_ordering(654) 00:15:00.993 fused_ordering(655) 00:15:00.993 fused_ordering(656) 00:15:00.993 fused_ordering(657) 00:15:00.993 fused_ordering(658) 00:15:00.993 fused_ordering(659) 00:15:00.993 fused_ordering(660) 00:15:00.993 fused_ordering(661) 00:15:00.993 fused_ordering(662) 00:15:00.993 fused_ordering(663) 00:15:00.993 fused_ordering(664) 00:15:00.993 fused_ordering(665) 00:15:00.993 fused_ordering(666) 00:15:00.993 fused_ordering(667) 00:15:00.993 fused_ordering(668) 00:15:00.993 fused_ordering(669) 00:15:00.993 fused_ordering(670) 00:15:00.993 fused_ordering(671) 00:15:00.993 fused_ordering(672) 00:15:00.993 fused_ordering(673) 00:15:00.993 fused_ordering(674) 00:15:00.993 fused_ordering(675) 00:15:00.993 fused_ordering(676) 00:15:00.993 fused_ordering(677) 00:15:00.993 fused_ordering(678) 00:15:00.993 fused_ordering(679) 00:15:00.993 fused_ordering(680) 00:15:00.993 fused_ordering(681) 00:15:00.993 fused_ordering(682) 00:15:00.993 fused_ordering(683) 00:15:00.993 fused_ordering(684) 00:15:00.993 fused_ordering(685) 00:15:00.993 fused_ordering(686) 00:15:00.993 fused_ordering(687) 00:15:00.993 fused_ordering(688) 00:15:00.993 fused_ordering(689) 00:15:00.993 fused_ordering(690) 00:15:00.993 fused_ordering(691) 00:15:00.993 fused_ordering(692) 00:15:00.993 fused_ordering(693) 00:15:00.993 fused_ordering(694) 00:15:00.993 
fused_ordering(695) 00:15:00.993 fused_ordering(696) 00:15:00.993 fused_ordering(697) 00:15:00.993 fused_ordering(698) 00:15:00.993 fused_ordering(699) 00:15:00.993 fused_ordering(700) 00:15:00.993 fused_ordering(701) 00:15:00.993 fused_ordering(702) 00:15:00.993 fused_ordering(703) 00:15:00.993 fused_ordering(704) 00:15:00.993 fused_ordering(705) 00:15:00.993 fused_ordering(706) 00:15:00.993 fused_ordering(707) 00:15:00.993 fused_ordering(708) 00:15:00.993 fused_ordering(709) 00:15:00.993 fused_ordering(710) 00:15:00.993 fused_ordering(711) 00:15:00.993 fused_ordering(712) 00:15:00.993 fused_ordering(713) 00:15:00.993 fused_ordering(714) 00:15:00.993 fused_ordering(715) 00:15:00.993 fused_ordering(716) 00:15:00.993 fused_ordering(717) 00:15:00.993 fused_ordering(718) 00:15:00.993 fused_ordering(719) 00:15:00.993 fused_ordering(720) 00:15:00.993 fused_ordering(721) 00:15:00.993 fused_ordering(722) 00:15:00.993 fused_ordering(723) 00:15:00.993 fused_ordering(724) 00:15:00.993 fused_ordering(725) 00:15:00.993 fused_ordering(726) 00:15:00.993 fused_ordering(727) 00:15:00.993 fused_ordering(728) 00:15:00.993 fused_ordering(729) 00:15:00.993 fused_ordering(730) 00:15:00.993 fused_ordering(731) 00:15:00.993 fused_ordering(732) 00:15:00.993 fused_ordering(733) 00:15:00.993 fused_ordering(734) 00:15:00.993 fused_ordering(735) 00:15:00.993 fused_ordering(736) 00:15:00.993 fused_ordering(737) 00:15:00.993 fused_ordering(738) 00:15:00.993 fused_ordering(739) 00:15:00.993 fused_ordering(740) 00:15:00.993 fused_ordering(741) 00:15:00.993 fused_ordering(742) 00:15:00.993 fused_ordering(743) 00:15:00.993 fused_ordering(744) 00:15:00.993 fused_ordering(745) 00:15:00.993 fused_ordering(746) 00:15:00.993 fused_ordering(747) 00:15:00.993 fused_ordering(748) 00:15:00.993 fused_ordering(749) 00:15:00.993 fused_ordering(750) 00:15:00.993 fused_ordering(751) 00:15:00.993 fused_ordering(752) 00:15:00.993 fused_ordering(753) 00:15:00.993 fused_ordering(754) 00:15:00.993 fused_ordering(755) 
00:15:00.993 fused_ordering(756) 00:15:00.993 fused_ordering(757) 00:15:00.993 fused_ordering(758) 00:15:00.993 fused_ordering(759) 00:15:00.993 fused_ordering(760) 00:15:00.993 fused_ordering(761) 00:15:00.993 fused_ordering(762) 00:15:00.993 fused_ordering(763) 00:15:00.993 fused_ordering(764) 00:15:00.993 fused_ordering(765) 00:15:00.993 fused_ordering(766) 00:15:00.993 fused_ordering(767) 00:15:00.993 fused_ordering(768) 00:15:00.993 fused_ordering(769) 00:15:00.993 fused_ordering(770) 00:15:00.993 fused_ordering(771) 00:15:00.993 fused_ordering(772) 00:15:00.993 fused_ordering(773) 00:15:00.993 fused_ordering(774) 00:15:00.993 fused_ordering(775) 00:15:00.993 fused_ordering(776) 00:15:00.993 fused_ordering(777) 00:15:00.993 fused_ordering(778) 00:15:00.993 fused_ordering(779) 00:15:00.993 fused_ordering(780) 00:15:00.994 fused_ordering(781) 00:15:00.994 fused_ordering(782) 00:15:00.994 fused_ordering(783) 00:15:00.994 fused_ordering(784) 00:15:00.994 fused_ordering(785) 00:15:00.994 fused_ordering(786) 00:15:00.994 fused_ordering(787) 00:15:00.994 fused_ordering(788) 00:15:00.994 fused_ordering(789) 00:15:00.994 fused_ordering(790) 00:15:00.994 fused_ordering(791) 00:15:00.994 fused_ordering(792) 00:15:00.994 fused_ordering(793) 00:15:00.994 fused_ordering(794) 00:15:00.994 fused_ordering(795) 00:15:00.994 fused_ordering(796) 00:15:00.994 fused_ordering(797) 00:15:00.994 fused_ordering(798) 00:15:00.994 fused_ordering(799) 00:15:00.994 fused_ordering(800) 00:15:00.994 fused_ordering(801) 00:15:00.994 fused_ordering(802) 00:15:00.994 fused_ordering(803) 00:15:00.994 fused_ordering(804) 00:15:00.994 fused_ordering(805) 00:15:00.994 fused_ordering(806) 00:15:00.994 fused_ordering(807) 00:15:00.994 fused_ordering(808) 00:15:00.994 fused_ordering(809) 00:15:00.994 fused_ordering(810) 00:15:00.994 fused_ordering(811) 00:15:00.994 fused_ordering(812) 00:15:00.994 fused_ordering(813) 00:15:00.994 fused_ordering(814) 00:15:00.994 fused_ordering(815) 00:15:00.994 
fused_ordering(816) 00:15:00.994 fused_ordering(817) 00:15:00.994 fused_ordering(818) 00:15:00.994 fused_ordering(819) 00:15:00.994 fused_ordering(820) 00:15:01.559 fused_ordering(821) 00:15:01.559 fused_ordering(822) 00:15:01.559 fused_ordering(823) 00:15:01.559 fused_ordering(824) 00:15:01.559 fused_ordering(825) 00:15:01.559 fused_ordering(826) 00:15:01.559 fused_ordering(827) 00:15:01.559 fused_ordering(828) 00:15:01.559 fused_ordering(829) 00:15:01.559 fused_ordering(830) 00:15:01.559 fused_ordering(831) 00:15:01.559 fused_ordering(832) 00:15:01.559 fused_ordering(833) 00:15:01.559 fused_ordering(834) 00:15:01.559 fused_ordering(835) 00:15:01.559 fused_ordering(836) 00:15:01.559 fused_ordering(837) 00:15:01.559 fused_ordering(838) 00:15:01.559 fused_ordering(839) 00:15:01.559 fused_ordering(840) 00:15:01.559 fused_ordering(841) 00:15:01.559 fused_ordering(842) 00:15:01.559 fused_ordering(843) 00:15:01.559 fused_ordering(844) 00:15:01.559 fused_ordering(845) 00:15:01.559 fused_ordering(846) 00:15:01.559 fused_ordering(847) 00:15:01.559 fused_ordering(848) 00:15:01.559 fused_ordering(849) 00:15:01.559 fused_ordering(850) 00:15:01.559 fused_ordering(851) 00:15:01.559 fused_ordering(852) 00:15:01.559 fused_ordering(853) 00:15:01.559 fused_ordering(854) 00:15:01.559 fused_ordering(855) 00:15:01.559 fused_ordering(856) 00:15:01.559 fused_ordering(857) 00:15:01.559 fused_ordering(858) 00:15:01.559 fused_ordering(859) 00:15:01.559 fused_ordering(860) 00:15:01.559 fused_ordering(861) 00:15:01.559 fused_ordering(862) 00:15:01.559 fused_ordering(863) 00:15:01.559 fused_ordering(864) 00:15:01.559 fused_ordering(865) 00:15:01.559 fused_ordering(866) 00:15:01.559 fused_ordering(867) 00:15:01.559 fused_ordering(868) 00:15:01.559 fused_ordering(869) 00:15:01.559 fused_ordering(870) 00:15:01.559 fused_ordering(871) 00:15:01.560 fused_ordering(872) 00:15:01.560 fused_ordering(873) 00:15:01.560 fused_ordering(874) 00:15:01.560 fused_ordering(875) 00:15:01.560 fused_ordering(876) 
00:15:01.560 fused_ordering(877) 00:15:01.560 fused_ordering(878) 00:15:01.560 fused_ordering(879) 00:15:01.560 fused_ordering(880) 00:15:01.560 fused_ordering(881) 00:15:01.560 fused_ordering(882) 00:15:01.560 fused_ordering(883) 00:15:01.560 fused_ordering(884) 00:15:01.560 fused_ordering(885) 00:15:01.560 fused_ordering(886) 00:15:01.560 fused_ordering(887) 00:15:01.560 fused_ordering(888) 00:15:01.560 fused_ordering(889) 00:15:01.560 fused_ordering(890) 00:15:01.560 fused_ordering(891) 00:15:01.560 fused_ordering(892) 00:15:01.560 fused_ordering(893) 00:15:01.560 fused_ordering(894) 00:15:01.560 fused_ordering(895) 00:15:01.560 fused_ordering(896) 00:15:01.560 fused_ordering(897) 00:15:01.560 fused_ordering(898) 00:15:01.560 fused_ordering(899) 00:15:01.560 fused_ordering(900) 00:15:01.560 fused_ordering(901) 00:15:01.560 fused_ordering(902) 00:15:01.560 fused_ordering(903) 00:15:01.560 fused_ordering(904) 00:15:01.560 fused_ordering(905) 00:15:01.560 fused_ordering(906) 00:15:01.560 fused_ordering(907) 00:15:01.560 fused_ordering(908) 00:15:01.560 fused_ordering(909) 00:15:01.560 fused_ordering(910) 00:15:01.560 fused_ordering(911) 00:15:01.560 fused_ordering(912) 00:15:01.560 fused_ordering(913) 00:15:01.560 fused_ordering(914) 00:15:01.560 fused_ordering(915) 00:15:01.560 fused_ordering(916) 00:15:01.560 fused_ordering(917) 00:15:01.560 fused_ordering(918) 00:15:01.560 fused_ordering(919) 00:15:01.560 fused_ordering(920) 00:15:01.560 fused_ordering(921) 00:15:01.560 fused_ordering(922) 00:15:01.560 fused_ordering(923) 00:15:01.560 fused_ordering(924) 00:15:01.560 fused_ordering(925) 00:15:01.560 fused_ordering(926) 00:15:01.560 fused_ordering(927) 00:15:01.560 fused_ordering(928) 00:15:01.560 fused_ordering(929) 00:15:01.560 fused_ordering(930) 00:15:01.560 fused_ordering(931) 00:15:01.560 fused_ordering(932) 00:15:01.560 fused_ordering(933) 00:15:01.560 fused_ordering(934) 00:15:01.560 fused_ordering(935) 00:15:01.560 fused_ordering(936) 00:15:01.560 
fused_ordering(937) 00:15:01.560 fused_ordering(938) 00:15:01.560 fused_ordering(939) 00:15:01.560 fused_ordering(940) 00:15:01.560 fused_ordering(941) 00:15:01.560 fused_ordering(942) 00:15:01.560 fused_ordering(943) 00:15:01.560 fused_ordering(944) 00:15:01.560 fused_ordering(945) 00:15:01.560 fused_ordering(946) 00:15:01.560 fused_ordering(947) 00:15:01.560 fused_ordering(948) 00:15:01.560 fused_ordering(949) 00:15:01.560 fused_ordering(950) 00:15:01.560 fused_ordering(951) 00:15:01.560 fused_ordering(952) 00:15:01.560 fused_ordering(953) 00:15:01.560 fused_ordering(954) 00:15:01.560 fused_ordering(955) 00:15:01.560 fused_ordering(956) 00:15:01.560 fused_ordering(957) 00:15:01.560 fused_ordering(958) 00:15:01.560 fused_ordering(959) 00:15:01.560 fused_ordering(960) 00:15:01.560 fused_ordering(961) 00:15:01.560 fused_ordering(962) 00:15:01.560 fused_ordering(963) 00:15:01.560 fused_ordering(964) 00:15:01.560 fused_ordering(965) 00:15:01.560 fused_ordering(966) 00:15:01.560 fused_ordering(967) 00:15:01.560 fused_ordering(968) 00:15:01.560 fused_ordering(969) 00:15:01.560 fused_ordering(970) 00:15:01.560 fused_ordering(971) 00:15:01.560 fused_ordering(972) 00:15:01.560 fused_ordering(973) 00:15:01.560 fused_ordering(974) 00:15:01.560 fused_ordering(975) 00:15:01.560 fused_ordering(976) 00:15:01.560 fused_ordering(977) 00:15:01.560 fused_ordering(978) 00:15:01.560 fused_ordering(979) 00:15:01.560 fused_ordering(980) 00:15:01.560 fused_ordering(981) 00:15:01.560 fused_ordering(982) 00:15:01.560 fused_ordering(983) 00:15:01.560 fused_ordering(984) 00:15:01.560 fused_ordering(985) 00:15:01.560 fused_ordering(986) 00:15:01.560 fused_ordering(987) 00:15:01.560 fused_ordering(988) 00:15:01.560 fused_ordering(989) 00:15:01.560 fused_ordering(990) 00:15:01.560 fused_ordering(991) 00:15:01.560 fused_ordering(992) 00:15:01.560 fused_ordering(993) 00:15:01.560 fused_ordering(994) 00:15:01.560 fused_ordering(995) 00:15:01.560 fused_ordering(996) 00:15:01.560 fused_ordering(997) 
00:15:01.560 fused_ordering(998) 00:15:01.560 fused_ordering(999) 00:15:01.560 fused_ordering(1000) 00:15:01.560 fused_ordering(1001) 00:15:01.560 fused_ordering(1002) 00:15:01.560 fused_ordering(1003) 00:15:01.560 fused_ordering(1004) 00:15:01.560 fused_ordering(1005) 00:15:01.560 fused_ordering(1006) 00:15:01.560 fused_ordering(1007) 00:15:01.560 fused_ordering(1008) 00:15:01.560 fused_ordering(1009) 00:15:01.560 fused_ordering(1010) 00:15:01.560 fused_ordering(1011) 00:15:01.560 fused_ordering(1012) 00:15:01.560 fused_ordering(1013) 00:15:01.560 fused_ordering(1014) 00:15:01.560 fused_ordering(1015) 00:15:01.560 fused_ordering(1016) 00:15:01.560 fused_ordering(1017) 00:15:01.560 fused_ordering(1018) 00:15:01.560 fused_ordering(1019) 00:15:01.560 fused_ordering(1020) 00:15:01.560 fused_ordering(1021) 00:15:01.560 fused_ordering(1022) 00:15:01.560 fused_ordering(1023) 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:01.560 rmmod nvme_tcp 00:15:01.560 rmmod nvme_fabrics 00:15:01.560 rmmod nvme_keyring 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3139073 ']' 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3139073 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3139073 ']' 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3139073 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139073 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139073' 00:15:01.560 killing process with pid 3139073 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3139073 00:15:01.560 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3139073 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.819 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:04.358 00:15:04.358 real 0m7.596s 00:15:04.358 user 0m5.053s 00:15:04.358 sys 0m3.285s 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:04.358 ************************************ 00:15:04.358 END TEST nvmf_fused_ordering 00:15:04.358 ************************************ 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.358 13:14:01 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.358 ************************************ 00:15:04.358 START TEST nvmf_ns_masking 00:15:04.358 ************************************ 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:04.358 * Looking for test storage... 00:15:04.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.358 13:14:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:04.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.358 --rc genhtml_branch_coverage=1 00:15:04.358 --rc genhtml_function_coverage=1 00:15:04.358 --rc genhtml_legend=1 00:15:04.358 --rc geninfo_all_blocks=1 00:15:04.358 --rc geninfo_unexecuted_blocks=1 00:15:04.358 00:15:04.358 ' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:04.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.358 --rc genhtml_branch_coverage=1 00:15:04.358 --rc genhtml_function_coverage=1 00:15:04.358 --rc genhtml_legend=1 00:15:04.358 --rc geninfo_all_blocks=1 00:15:04.358 --rc geninfo_unexecuted_blocks=1 00:15:04.358 00:15:04.358 ' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:04.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.358 --rc genhtml_branch_coverage=1 00:15:04.358 --rc genhtml_function_coverage=1 00:15:04.358 --rc genhtml_legend=1 00:15:04.358 --rc geninfo_all_blocks=1 00:15:04.358 --rc geninfo_unexecuted_blocks=1 00:15:04.358 00:15:04.358 ' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:04.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.358 --rc genhtml_branch_coverage=1 00:15:04.358 --rc 
genhtml_function_coverage=1 00:15:04.358 --rc genhtml_legend=1 00:15:04.358 --rc geninfo_all_blocks=1 00:15:04.358 --rc geninfo_unexecuted_blocks=1 00:15:04.358 00:15:04.358 ' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.358 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:04.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7563b68a-7b58-47f4-bfd5-381c9c3f40c9 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=b30ced37-9d91-4a78-8946-a4d5bff24217 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=eb00daae-258b-418c-aa67-234cd4119bf1 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:04.359 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:06.263 13:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:06.263 13:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:06.263 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:06.263 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:15:06.263 Found net devices under 0000:09:00.0: cvl_0_0 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:06.263 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:06.264 Found net devices under 0000:09:00.1: cvl_0_1 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:06.264 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:06.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:06.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:15:06.522 00:15:06.522 --- 10.0.0.2 ping statistics --- 00:15:06.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.522 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:06.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:06.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:15:06.522 00:15:06.522 --- 10.0.0.1 ping statistics --- 00:15:06.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:06.522 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:06.522 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3141432 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3141432 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3141432 ']' 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.522 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.522 [2024-11-25 13:14:04.085155] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:15:06.522 [2024-11-25 13:14:04.085246] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.522 [2024-11-25 13:14:04.158733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.781 [2024-11-25 13:14:04.218200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.781 [2024-11-25 13:14:04.218250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:06.781 [2024-11-25 13:14:04.218277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.781 [2024-11-25 13:14:04.218288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.781 [2024-11-25 13:14:04.218298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:06.781 [2024-11-25 13:14:04.218935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.781 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.781 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:06.781 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.781 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.781 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:06.781 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.781 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:07.038 [2024-11-25 13:14:04.665495] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.038 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:07.038 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:07.039 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:07.604 Malloc1 00:15:07.604 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:07.862 Malloc2 00:15:07.862 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:08.119 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:08.377 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.634 [2024-11-25 13:14:06.183729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.634 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:08.634 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eb00daae-258b-418c-aa67-234cd4119bf1 -a 10.0.0.2 -s 4420 -i 4 00:15:08.892 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.892 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:08.892 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.892 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:08.892 13:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:10.792 [ 0]:0x1 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.792 
13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe3ab5fdbd1340a79f7f8387cc7021a2 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe3ab5fdbd1340a79f7f8387cc7021a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.792 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.358 [ 0]:0x1 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe3ab5fdbd1340a79f7f8387cc7021a2 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe3ab5fdbd1340a79f7f8387cc7021a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.358 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.359 [ 1]:0x2 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1a3a017ad9e43d29bfc008d1f80807b 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1a3a017ad9e43d29bfc008d1f80807b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.359 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.616 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:11.873 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:11.873 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eb00daae-258b-418c-aa67-234cd4119bf1 -a 10.0.0.2 -s 4420 -i 4 00:15:12.176 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:12.176 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:12.176 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.176 13:14:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:12.176 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:12.176 13:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.099 [ 0]:0x2 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.099 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.357 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1a3a017ad9e43d29bfc008d1f80807b 00:15:14.357 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1a3a017ad9e43d29bfc008d1f80807b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.357 13:14:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.615 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:14.615 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.615 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:14.615 [ 0]:0x1 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe3ab5fdbd1340a79f7f8387cc7021a2 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe3ab5fdbd1340a79f7f8387cc7021a2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:14.616 [ 1]:0x2 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1a3a017ad9e43d29bfc008d1f80807b 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1a3a017ad9e43d29bfc008d1f80807b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:14.616 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:15.180 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:15.181 [ 0]:0x2 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1a3a017ad9e43d29bfc008d1f80807b 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1a3a017ad9e43d29bfc008d1f80807b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.181 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:15.439 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:15.439 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I eb00daae-258b-418c-aa67-234cd4119bf1 -a 10.0.0.2 -s 4420 -i 4 00:15:15.439 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:15.439 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:15.439 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.696 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:15.696 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:15.696 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:17.594 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:17.594 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:17.594 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.594 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:17.594 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.595 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:17.852 [ 0]:0x1 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:17.852 13:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe3ab5fdbd1340a79f7f8387cc7021a2 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe3ab5fdbd1340a79f7f8387cc7021a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:17.852 [ 1]:0x2 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1a3a017ad9e43d29bfc008d1f80807b 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1a3a017ad9e43d29bfc008d1f80807b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:17.852 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:18.110 
13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.110 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:18.368 [ 0]:0x2 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1a3a017ad9e43d29bfc008d1f80807b 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1a3a017ad9e43d29bfc008d1f80807b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.368 13:14:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:18.368 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:18.626 [2024-11-25 13:14:16.125790] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:18.626 request: 00:15:18.626 { 00:15:18.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.626 "nsid": 2, 00:15:18.626 "host": "nqn.2016-06.io.spdk:host1", 00:15:18.626 "method": "nvmf_ns_remove_host", 00:15:18.626 "req_id": 1 00:15:18.626 } 00:15:18.626 Got JSON-RPC error response 00:15:18.626 response: 00:15:18.626 { 00:15:18.626 "code": -32602, 00:15:18.626 "message": "Invalid parameters" 00:15:18.626 } 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:18.626 13:14:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.626 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:18.627 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:18.627 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:18.627 [ 0]:0x2 00:15:18.627 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:18.627 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c1a3a017ad9e43d29bfc008d1f80807b 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c1a3a017ad9e43d29bfc008d1f80807b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3143058 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3143058 
/var/tmp/host.sock 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3143058 ']' 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:18.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.884 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:18.884 [2024-11-25 13:14:16.472900] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:15:18.884 [2024-11-25 13:14:16.472981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143058 ] 00:15:18.884 [2024-11-25 13:14:16.538021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.143 [2024-11-25 13:14:16.595922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.402 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.402 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:19.402 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.675 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.932 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7563b68a-7b58-47f4-bfd5-381c9c3f40c9 00:15:19.932 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:19.932 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7563B68A7B5847F4BFD5381C9C3F40C9 -i 00:15:20.190 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid b30ced37-9d91-4a78-8946-a4d5bff24217 00:15:20.190 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:20.190 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g B30CED379D914A788946A4D5BFF24217 -i 00:15:20.447 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:20.704 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:20.962 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:20.962 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:21.528 nvme0n1 00:15:21.528 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:21.528 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:21.785 nvme1n2 00:15:21.785 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:21.785 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:21.785 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:21.785 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:21.785 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:22.042 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:22.042 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:22.042 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:22.042 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:22.300 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7563b68a-7b58-47f4-bfd5-381c9c3f40c9 == \7\5\6\3\b\6\8\a\-\7\b\5\8\-\4\7\f\4\-\b\f\d\5\-\3\8\1\c\9\c\3\f\4\0\c\9 ]] 00:15:22.300 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:22.300 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:22.300 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:22.558 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ b30ced37-9d91-4a78-8946-a4d5bff24217 == \b\3\0\c\e\d\3\7\-\9\d\9\1\-\4\a\7\8\-\8\9\4\6\-\a\4\d\5\b\f\f\2\4\2\1\7 ]] 00:15:22.558 13:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7563b68a-7b58-47f4-bfd5-381c9c3f40c9 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7563B68A7B5847F4BFD5381C9C3F40C9 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7563B68A7B5847F4BFD5381C9C3F40C9 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:23.125 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7563B68A7B5847F4BFD5381C9C3F40C9 00:15:23.383 [2024-11-25 13:14:21.027969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:23.383 [2024-11-25 13:14:21.028006] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:23.383 [2024-11-25 13:14:21.028036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:23.383 request: 00:15:23.383 { 00:15:23.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.383 "namespace": { 00:15:23.383 "bdev_name": "invalid", 00:15:23.383 "nsid": 1, 00:15:23.383 "nguid": "7563B68A7B5847F4BFD5381C9C3F40C9", 00:15:23.383 "no_auto_visible": false 00:15:23.383 }, 00:15:23.383 "method": "nvmf_subsystem_add_ns", 00:15:23.383 "req_id": 1 00:15:23.383 } 00:15:23.383 Got JSON-RPC error response 00:15:23.383 response: 00:15:23.383 { 00:15:23.383 "code": -32602, 00:15:23.383 "message": "Invalid parameters" 00:15:23.383 } 00:15:23.641 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:23.641 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:23.641 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.641 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:23.641 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7563b68a-7b58-47f4-bfd5-381c9c3f40c9 00:15:23.641 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:23.641 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7563B68A7B5847F4BFD5381C9C3F40C9 -i 00:15:23.899 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:25.798 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:25.798 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:25.798 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3143058 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3143058 ']' 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3143058 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143058 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143058' 00:15:26.056 killing process with pid 3143058 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3143058 00:15:26.056 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3143058 00:15:26.621 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:26.879 rmmod nvme_tcp 00:15:26.879 rmmod 
nvme_fabrics 00:15:26.879 rmmod nvme_keyring 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3141432 ']' 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3141432 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3141432 ']' 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3141432 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3141432 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3141432' 00:15:26.879 killing process with pid 3141432 00:15:26.879 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3141432 00:15:26.880 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3141432 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.140 
13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.140 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:29.677 00:15:29.677 real 0m25.241s 00:15:29.677 user 0m36.652s 00:15:29.677 sys 0m4.761s 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:29.677 ************************************ 00:15:29.677 END TEST nvmf_ns_masking 00:15:29.677 ************************************ 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:29.677 ************************************ 00:15:29.677 START TEST nvmf_nvme_cli 00:15:29.677 ************************************ 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:29.677 * Looking for test storage... 00:15:29.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:29.677 13:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.677 --rc genhtml_branch_coverage=1 00:15:29.677 --rc genhtml_function_coverage=1 00:15:29.677 --rc genhtml_legend=1 00:15:29.677 --rc geninfo_all_blocks=1 00:15:29.677 --rc geninfo_unexecuted_blocks=1 00:15:29.677 
00:15:29.677 ' 00:15:29.677 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.677 --rc genhtml_branch_coverage=1 00:15:29.677 --rc genhtml_function_coverage=1 00:15:29.677 --rc genhtml_legend=1 00:15:29.677 --rc geninfo_all_blocks=1 00:15:29.677 --rc geninfo_unexecuted_blocks=1 00:15:29.677 00:15:29.677 ' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:29.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.678 --rc genhtml_branch_coverage=1 00:15:29.678 --rc genhtml_function_coverage=1 00:15:29.678 --rc genhtml_legend=1 00:15:29.678 --rc geninfo_all_blocks=1 00:15:29.678 --rc geninfo_unexecuted_blocks=1 00:15:29.678 00:15:29.678 ' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:29.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:29.678 --rc genhtml_branch_coverage=1 00:15:29.678 --rc genhtml_function_coverage=1 00:15:29.678 --rc genhtml_legend=1 00:15:29.678 --rc geninfo_all_blocks=1 00:15:29.678 --rc geninfo_unexecuted_blocks=1 00:15:29.678 00:15:29.678 ' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.678 13:14:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:29.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:29.678 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:31.582 13:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:31.582 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:31.582 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:31.582 13:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:31.582 Found net devices under 0000:09:00.0: cvl_0_0 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:31.582 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:31.583 Found net devices under 0000:09:00.1: cvl_0_1 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:31.583 13:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:31.583 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:31.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:31.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:15:31.841 00:15:31.841 --- 10.0.0.2 ping statistics --- 00:15:31.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.841 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:31.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:15:31.841 00:15:31.841 --- 10.0.0.1 ping statistics --- 00:15:31.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.841 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:31.841 13:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3145974 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3145974 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3145974 ']' 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.841 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.841 [2024-11-25 13:14:29.375625] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:15:31.841 [2024-11-25 13:14:29.375716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.841 [2024-11-25 13:14:29.448722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.099 [2024-11-25 13:14:29.512125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.099 [2024-11-25 13:14:29.512168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.099 [2024-11-25 13:14:29.512196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.099 [2024-11-25 13:14:29.512207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.099 [2024-11-25 13:14:29.512216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.099 [2024-11-25 13:14:29.513858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.099 [2024-11-25 13:14:29.513883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.099 [2024-11-25 13:14:29.513938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.099 [2024-11-25 13:14:29.513941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.099 [2024-11-25 13:14:29.682921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.099 Malloc0 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.099 Malloc1 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.099 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.357 [2024-11-25 13:14:29.774405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.357 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:15:32.357 00:15:32.357 Discovery Log Number of Records 2, Generation counter 2 00:15:32.357 =====Discovery Log Entry 0====== 00:15:32.357 trtype: tcp 00:15:32.357 adrfam: ipv4 00:15:32.357 subtype: current discovery subsystem 00:15:32.357 treq: not required 00:15:32.357 portid: 0 00:15:32.357 trsvcid: 4420 
00:15:32.357 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:32.357 traddr: 10.0.0.2 00:15:32.358 eflags: explicit discovery connections, duplicate discovery information 00:15:32.358 sectype: none 00:15:32.358 =====Discovery Log Entry 1====== 00:15:32.358 trtype: tcp 00:15:32.358 adrfam: ipv4 00:15:32.358 subtype: nvme subsystem 00:15:32.358 treq: not required 00:15:32.358 portid: 0 00:15:32.358 trsvcid: 4420 00:15:32.358 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:32.358 traddr: 10.0.0.2 00:15:32.358 eflags: none 00:15:32.358 sectype: none 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:32.358 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.291 13:14:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:33.291 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:33.291 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.291 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:33.291 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:33.291 13:14:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:35.185 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:35.185 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:35.185 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:35.185 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:35.185 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:35.186 
13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:35.186 /dev/nvme0n2 ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:35.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.186 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.186 rmmod nvme_tcp 00:15:35.186 rmmod nvme_fabrics 00:15:35.186 rmmod nvme_keyring 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3145974 ']' 
00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3145974 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3145974 ']' 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3145974 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3145974 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3145974' 00:15:35.444 killing process with pid 3145974 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3145974 00:15:35.444 13:14:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3145974 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.702 13:14:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.606 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:37.607 00:15:37.607 real 0m8.445s 00:15:37.607 user 0m15.358s 00:15:37.607 sys 0m2.369s 00:15:37.607 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.607 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:37.607 ************************************ 00:15:37.607 END TEST nvmf_nvme_cli 00:15:37.607 ************************************ 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.866 ************************************ 
00:15:37.866 START TEST nvmf_vfio_user 00:15:37.866 ************************************ 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:37.866 * Looking for test storage... 00:15:37.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.866 
13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:37.866 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:37.867 13:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:37.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.867 --rc genhtml_branch_coverage=1 00:15:37.867 --rc genhtml_function_coverage=1 00:15:37.867 --rc genhtml_legend=1 00:15:37.867 --rc geninfo_all_blocks=1 00:15:37.867 --rc geninfo_unexecuted_blocks=1 00:15:37.867 00:15:37.867 ' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:37.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.867 --rc genhtml_branch_coverage=1 00:15:37.867 --rc genhtml_function_coverage=1 00:15:37.867 --rc genhtml_legend=1 00:15:37.867 --rc geninfo_all_blocks=1 00:15:37.867 --rc geninfo_unexecuted_blocks=1 00:15:37.867 00:15:37.867 ' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:37.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.867 --rc genhtml_branch_coverage=1 00:15:37.867 --rc genhtml_function_coverage=1 00:15:37.867 --rc genhtml_legend=1 00:15:37.867 --rc geninfo_all_blocks=1 00:15:37.867 --rc geninfo_unexecuted_blocks=1 00:15:37.867 00:15:37.867 ' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:37.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.867 --rc genhtml_branch_coverage=1 00:15:37.867 --rc genhtml_function_coverage=1 00:15:37.867 --rc genhtml_legend=1 00:15:37.867 --rc geninfo_all_blocks=1 00:15:37.867 --rc geninfo_unexecuted_blocks=1 00:15:37.867 00:15:37.867 ' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.867 
13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:37.867 13:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3146902 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3146902' 00:15:37.867 Process pid: 3146902 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3146902 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 3146902 ']' 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.867 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.867 [2024-11-25 13:14:35.505706] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:15:37.867 [2024-11-25 13:14:35.505801] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.126 [2024-11-25 13:14:35.578022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.126 [2024-11-25 13:14:35.638032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.126 [2024-11-25 13:14:35.638079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.126 [2024-11-25 13:14:35.638102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.126 [2024-11-25 13:14:35.638129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.126 [2024-11-25 13:14:35.638140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.126 [2024-11-25 13:14:35.639754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.126 [2024-11-25 13:14:35.639818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.126 [2024-11-25 13:14:35.639882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.126 [2024-11-25 13:14:35.639886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.126 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.126 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:38.126 13:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:39.498 13:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:39.498 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:39.498 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:39.498 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:39.498 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:39.498 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:39.756 Malloc1 00:15:39.756 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:40.013 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:40.317 13:14:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:40.603 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:40.603 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:40.603 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:40.860 Malloc2 00:15:40.861 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:41.427 13:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:41.427 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:41.685 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:41.685 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:41.685 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:41.685 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:41.685 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:41.685 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:41.945 [2024-11-25 13:14:39.346094] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:15:41.945 [2024-11-25 13:14:39.346136] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147349 ] 00:15:41.945 [2024-11-25 13:14:39.394125] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:41.945 [2024-11-25 13:14:39.406802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.945 [2024-11-25 13:14:39.406835] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8538f77000 00:15:41.945 [2024-11-25 13:14:39.407794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.408794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.409796] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.410806] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.411807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.412817] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.413821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.414826] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:41.945 [2024-11-25 13:14:39.415829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:41.945 [2024-11-25 13:14:39.415858] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8538f6c000 00:15:41.945 [2024-11-25 13:14:39.417022] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:41.945 [2024-11-25 13:14:39.432642] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:41.945 [2024-11-25 13:14:39.432680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:41.945 [2024-11-25 13:14:39.434937] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:15:41.945 [2024-11-25 13:14:39.434987] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:41.945 [2024-11-25 13:14:39.435071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:41.945 [2024-11-25 13:14:39.435094] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:41.945 [2024-11-25 13:14:39.435105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:41.945 [2024-11-25 13:14:39.435937] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:41.945 [2024-11-25 13:14:39.435957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:41.945 [2024-11-25 13:14:39.435969] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:41.946 [2024-11-25 13:14:39.436947] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:41.946 [2024-11-25 13:14:39.436967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:41.946 [2024-11-25 13:14:39.436981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:41.946 [2024-11-25 13:14:39.437954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:41.946 [2024-11-25 13:14:39.437973] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:41.946 [2024-11-25 13:14:39.438956] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:41.946 [2024-11-25 13:14:39.438975] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:41.946 [2024-11-25 13:14:39.438984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:41.946 [2024-11-25 13:14:39.438995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:41.946 [2024-11-25 13:14:39.439105] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:41.946 [2024-11-25 13:14:39.439112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:41.946 [2024-11-25 13:14:39.439120] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:41.946 [2024-11-25 13:14:39.439965] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:41.946 [2024-11-25 13:14:39.440963] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:41.946 [2024-11-25 13:14:39.441970] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:15:41.946 [2024-11-25 13:14:39.442968] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.946 [2024-11-25 13:14:39.443065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:41.946 [2024-11-25 13:14:39.443981] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:41.946 [2024-11-25 13:14:39.443999] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:41.946 [2024-11-25 13:14:39.444008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444032] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:41.946 [2024-11-25 13:14:39.444050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444077] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.946 [2024-11-25 13:14:39.444087] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.946 [2024-11-25 13:14:39.444093] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.946 [2024-11-25 13:14:39.444110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.946 [2024-11-25 13:14:39.444167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:41.946 [2024-11-25 13:14:39.444183] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:41.946 [2024-11-25 13:14:39.444195] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:41.946 [2024-11-25 13:14:39.444203] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:41.946 [2024-11-25 13:14:39.444210] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:41.946 [2024-11-25 13:14:39.444218] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:41.946 [2024-11-25 13:14:39.444225] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:41.946 [2024-11-25 13:14:39.444232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:41.946 [2024-11-25 13:14:39.444273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:41.946 [2024-11-25 13:14:39.444365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.946 [2024-11-25 
13:14:39.444387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.946 [2024-11-25 13:14:39.444411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.946 [2024-11-25 13:14:39.444424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.946 [2024-11-25 13:14:39.444433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444449] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:41.946 [2024-11-25 13:14:39.444479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:41.946 [2024-11-25 13:14:39.444490] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:41.946 [2024-11-25 13:14:39.444498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:41.946 [2024-11-25 13:14:39.444542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:41.946 [2024-11-25 13:14:39.444619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444648] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:41.946 [2024-11-25 13:14:39.444655] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:41.946 [2024-11-25 13:14:39.444661] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.946 [2024-11-25 13:14:39.444670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:41.946 [2024-11-25 13:14:39.444684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:41.946 [2024-11-25 13:14:39.444700] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:41.946 [2024-11-25 13:14:39.444727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444754] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.946 [2024-11-25 13:14:39.444762] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.946 [2024-11-25 13:14:39.444770] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.946 [2024-11-25 13:14:39.444780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.946 [2024-11-25 13:14:39.444807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:41.946 [2024-11-25 13:14:39.444828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444854] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:41.946 [2024-11-25 13:14:39.444862] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.946 [2024-11-25 13:14:39.444867] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.946 [2024-11-25 13:14:39.444876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.946 [2024-11-25 13:14:39.444890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:41.946 [2024-11-25 13:14:39.444903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:41.946 [2024-11-25 13:14:39.444947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:41.947 [2024-11-25 13:14:39.444955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:41.947 [2024-11-25 13:14:39.444962] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:41.947 [2024-11-25 13:14:39.444969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:41.947 [2024-11-25 13:14:39.444977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:41.947 [2024-11-25 13:14:39.445000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:41.947 [2024-11-25 13:14:39.445018] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:41.947 [2024-11-25 13:14:39.445037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:41.947 [2024-11-25 13:14:39.445049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:41.947 [2024-11-25 13:14:39.445064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:41.947 [2024-11-25 13:14:39.445075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:41.947 [2024-11-25 13:14:39.445095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:41.947 [2024-11-25 13:14:39.445108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:41.947 [2024-11-25 13:14:39.445131] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:41.947 [2024-11-25 13:14:39.445141] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:41.947 [2024-11-25 13:14:39.445147] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:41.947 [2024-11-25 13:14:39.445153] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:41.947 [2024-11-25 13:14:39.445158] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:41.947 [2024-11-25 13:14:39.445167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:15:41.947 [2024-11-25 13:14:39.445179] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:41.947 [2024-11-25 13:14:39.445186] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:41.947 [2024-11-25 13:14:39.445192] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.947 [2024-11-25 13:14:39.445201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:41.947 [2024-11-25 13:14:39.445211] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:41.947 [2024-11-25 13:14:39.445219] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:41.947 [2024-11-25 13:14:39.445225] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.947 [2024-11-25 13:14:39.445233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:41.947 [2024-11-25 13:14:39.445244] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:41.947 [2024-11-25 13:14:39.445252] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:41.947 [2024-11-25 13:14:39.445257] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:41.947 [2024-11-25 13:14:39.445265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:41.947 [2024-11-25 13:14:39.445277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:15:41.947 [2024-11-25 13:14:39.445320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:41.947 [2024-11-25 13:14:39.445340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:41.947 [2024-11-25 13:14:39.445368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:41.947 ===================================================== 00:15:41.947 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:41.947 ===================================================== 00:15:41.947 Controller Capabilities/Features 00:15:41.947 ================================ 00:15:41.947 Vendor ID: 4e58 00:15:41.947 Subsystem Vendor ID: 4e58 00:15:41.947 Serial Number: SPDK1 00:15:41.947 Model Number: SPDK bdev Controller 00:15:41.947 Firmware Version: 25.01 00:15:41.947 Recommended Arb Burst: 6 00:15:41.947 IEEE OUI Identifier: 8d 6b 50 00:15:41.947 Multi-path I/O 00:15:41.947 May have multiple subsystem ports: Yes 00:15:41.947 May have multiple controllers: Yes 00:15:41.947 Associated with SR-IOV VF: No 00:15:41.947 Max Data Transfer Size: 131072 00:15:41.947 Max Number of Namespaces: 32 00:15:41.947 Max Number of I/O Queues: 127 00:15:41.947 NVMe Specification Version (VS): 1.3 00:15:41.947 NVMe Specification Version (Identify): 1.3 00:15:41.947 Maximum Queue Entries: 256 00:15:41.947 Contiguous Queues Required: Yes 00:15:41.947 Arbitration Mechanisms Supported 00:15:41.947 Weighted Round Robin: Not Supported 00:15:41.947 Vendor Specific: Not Supported 00:15:41.947 Reset Timeout: 15000 ms 00:15:41.947 Doorbell Stride: 4 bytes 00:15:41.947 NVM Subsystem Reset: Not Supported 00:15:41.947 Command Sets Supported 00:15:41.947 NVM Command Set: Supported 00:15:41.947 Boot Partition: Not Supported 00:15:41.947 Memory 
Page Size Minimum: 4096 bytes 00:15:41.947 Memory Page Size Maximum: 4096 bytes 00:15:41.947 Persistent Memory Region: Not Supported 00:15:41.947 Optional Asynchronous Events Supported 00:15:41.947 Namespace Attribute Notices: Supported 00:15:41.947 Firmware Activation Notices: Not Supported 00:15:41.947 ANA Change Notices: Not Supported 00:15:41.947 PLE Aggregate Log Change Notices: Not Supported 00:15:41.947 LBA Status Info Alert Notices: Not Supported 00:15:41.947 EGE Aggregate Log Change Notices: Not Supported 00:15:41.947 Normal NVM Subsystem Shutdown event: Not Supported 00:15:41.947 Zone Descriptor Change Notices: Not Supported 00:15:41.947 Discovery Log Change Notices: Not Supported 00:15:41.947 Controller Attributes 00:15:41.947 128-bit Host Identifier: Supported 00:15:41.947 Non-Operational Permissive Mode: Not Supported 00:15:41.947 NVM Sets: Not Supported 00:15:41.947 Read Recovery Levels: Not Supported 00:15:41.947 Endurance Groups: Not Supported 00:15:41.947 Predictable Latency Mode: Not Supported 00:15:41.947 Traffic Based Keep ALive: Not Supported 00:15:41.947 Namespace Granularity: Not Supported 00:15:41.947 SQ Associations: Not Supported 00:15:41.947 UUID List: Not Supported 00:15:41.947 Multi-Domain Subsystem: Not Supported 00:15:41.947 Fixed Capacity Management: Not Supported 00:15:41.947 Variable Capacity Management: Not Supported 00:15:41.947 Delete Endurance Group: Not Supported 00:15:41.947 Delete NVM Set: Not Supported 00:15:41.947 Extended LBA Formats Supported: Not Supported 00:15:41.947 Flexible Data Placement Supported: Not Supported 00:15:41.947 00:15:41.947 Controller Memory Buffer Support 00:15:41.947 ================================ 00:15:41.947 Supported: No 00:15:41.947 00:15:41.947 Persistent Memory Region Support 00:15:41.947 ================================ 00:15:41.947 Supported: No 00:15:41.947 00:15:41.947 Admin Command Set Attributes 00:15:41.947 ============================ 00:15:41.947 Security Send/Receive: Not Supported 
00:15:41.947 Format NVM: Not Supported 00:15:41.947 Firmware Activate/Download: Not Supported 00:15:41.947 Namespace Management: Not Supported 00:15:41.947 Device Self-Test: Not Supported 00:15:41.947 Directives: Not Supported 00:15:41.947 NVMe-MI: Not Supported 00:15:41.947 Virtualization Management: Not Supported 00:15:41.947 Doorbell Buffer Config: Not Supported 00:15:41.947 Get LBA Status Capability: Not Supported 00:15:41.947 Command & Feature Lockdown Capability: Not Supported 00:15:41.947 Abort Command Limit: 4 00:15:41.947 Async Event Request Limit: 4 00:15:41.947 Number of Firmware Slots: N/A 00:15:41.947 Firmware Slot 1 Read-Only: N/A 00:15:41.947 Firmware Activation Without Reset: N/A 00:15:41.947 Multiple Update Detection Support: N/A 00:15:41.947 Firmware Update Granularity: No Information Provided 00:15:41.947 Per-Namespace SMART Log: No 00:15:41.947 Asymmetric Namespace Access Log Page: Not Supported 00:15:41.947 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:41.947 Command Effects Log Page: Supported 00:15:41.947 Get Log Page Extended Data: Supported 00:15:41.947 Telemetry Log Pages: Not Supported 00:15:41.947 Persistent Event Log Pages: Not Supported 00:15:41.947 Supported Log Pages Log Page: May Support 00:15:41.947 Commands Supported & Effects Log Page: Not Supported 00:15:41.947 Feature Identifiers & Effects Log Page:May Support 00:15:41.947 NVMe-MI Commands & Effects Log Page: May Support 00:15:41.947 Data Area 4 for Telemetry Log: Not Supported 00:15:41.947 Error Log Page Entries Supported: 128 00:15:41.947 Keep Alive: Supported 00:15:41.947 Keep Alive Granularity: 10000 ms 00:15:41.947 00:15:41.947 NVM Command Set Attributes 00:15:41.947 ========================== 00:15:41.947 Submission Queue Entry Size 00:15:41.947 Max: 64 00:15:41.947 Min: 64 00:15:41.947 Completion Queue Entry Size 00:15:41.947 Max: 16 00:15:41.947 Min: 16 00:15:41.947 Number of Namespaces: 32 00:15:41.947 Compare Command: Supported 00:15:41.948 Write Uncorrectable 
Command: Not Supported 00:15:41.948 Dataset Management Command: Supported 00:15:41.948 Write Zeroes Command: Supported 00:15:41.948 Set Features Save Field: Not Supported 00:15:41.948 Reservations: Not Supported 00:15:41.948 Timestamp: Not Supported 00:15:41.948 Copy: Supported 00:15:41.948 Volatile Write Cache: Present 00:15:41.948 Atomic Write Unit (Normal): 1 00:15:41.948 Atomic Write Unit (PFail): 1 00:15:41.948 Atomic Compare & Write Unit: 1 00:15:41.948 Fused Compare & Write: Supported 00:15:41.948 Scatter-Gather List 00:15:41.948 SGL Command Set: Supported (Dword aligned) 00:15:41.948 SGL Keyed: Not Supported 00:15:41.948 SGL Bit Bucket Descriptor: Not Supported 00:15:41.948 SGL Metadata Pointer: Not Supported 00:15:41.948 Oversized SGL: Not Supported 00:15:41.948 SGL Metadata Address: Not Supported 00:15:41.948 SGL Offset: Not Supported 00:15:41.948 Transport SGL Data Block: Not Supported 00:15:41.948 Replay Protected Memory Block: Not Supported 00:15:41.948 00:15:41.948 Firmware Slot Information 00:15:41.948 ========================= 00:15:41.948 Active slot: 1 00:15:41.948 Slot 1 Firmware Revision: 25.01 00:15:41.948 00:15:41.948 00:15:41.948 Commands Supported and Effects 00:15:41.948 ============================== 00:15:41.948 Admin Commands 00:15:41.948 -------------- 00:15:41.948 Get Log Page (02h): Supported 00:15:41.948 Identify (06h): Supported 00:15:41.948 Abort (08h): Supported 00:15:41.948 Set Features (09h): Supported 00:15:41.948 Get Features (0Ah): Supported 00:15:41.948 Asynchronous Event Request (0Ch): Supported 00:15:41.948 Keep Alive (18h): Supported 00:15:41.948 I/O Commands 00:15:41.948 ------------ 00:15:41.948 Flush (00h): Supported LBA-Change 00:15:41.948 Write (01h): Supported LBA-Change 00:15:41.948 Read (02h): Supported 00:15:41.948 Compare (05h): Supported 00:15:41.948 Write Zeroes (08h): Supported LBA-Change 00:15:41.948 Dataset Management (09h): Supported LBA-Change 00:15:41.948 Copy (19h): Supported LBA-Change 00:15:41.948 
00:15:41.948 Error Log 00:15:41.948 ========= 00:15:41.948 00:15:41.948 Arbitration 00:15:41.948 =========== 00:15:41.948 Arbitration Burst: 1 00:15:41.948 00:15:41.948 Power Management 00:15:41.948 ================ 00:15:41.948 Number of Power States: 1 00:15:41.948 Current Power State: Power State #0 00:15:41.948 Power State #0: 00:15:41.948 Max Power: 0.00 W 00:15:41.948 Non-Operational State: Operational 00:15:41.948 Entry Latency: Not Reported 00:15:41.948 Exit Latency: Not Reported 00:15:41.948 Relative Read Throughput: 0 00:15:41.948 Relative Read Latency: 0 00:15:41.948 Relative Write Throughput: 0 00:15:41.948 Relative Write Latency: 0 00:15:41.948 Idle Power: Not Reported 00:15:41.948 Active Power: Not Reported 00:15:41.948 Non-Operational Permissive Mode: Not Supported 00:15:41.948 00:15:41.948 Health Information 00:15:41.948 ================== 00:15:41.948 Critical Warnings: 00:15:41.948 Available Spare Space: OK 00:15:41.948 Temperature: OK 00:15:41.948 Device Reliability: OK 00:15:41.948 Read Only: No 00:15:41.948 Volatile Memory Backup: OK 00:15:41.948 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:41.948 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:41.948 Available Spare: 0% 00:15:41.948 Available Spare Threshold: 0% 00:15:41.948 Life Percentage Used: 0% 00:15:41.948 [2024-11-25 13:14:39.445489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:41.948 [2024-11-25 13:14:39.445507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:41.948 [2024-11-25 13:14:39.445550] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:41.948 [2024-11-25 13:14:39.445569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.948 [2024-11-25 13:14:39.445604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.948 [2024-11-25 13:14:39.445618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.948 [2024-11-25 13:14:39.445628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.948 [2024-11-25 13:14:39.448315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:41.948 [2024-11-25 13:14:39.448336] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:41.948 [2024-11-25 13:14:39.449011] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.948 [2024-11-25 13:14:39.449098] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:41.948 [2024-11-25 13:14:39.449112] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:41.948 [2024-11-25 13:14:39.450021] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:41.948 [2024-11-25 13:14:39.450044] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:41.948 [2024-11-25 13:14:39.450097] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:41.948 [2024-11-25 13:14:39.453313] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:15:41.948 Data Units Read: 0 00:15:41.948 Data Units Written: 0 00:15:41.948 Host Read Commands: 0 00:15:41.948 Host Write Commands: 0 00:15:41.948 Controller Busy Time: 0 minutes 00:15:41.948 Power Cycles: 0 00:15:41.948 Power On Hours: 0 hours 00:15:41.948 Unsafe Shutdowns: 0 00:15:41.948 Unrecoverable Media Errors: 0 00:15:41.948 Lifetime Error Log Entries: 0 00:15:41.948 Warning Temperature Time: 0 minutes 00:15:41.948 Critical Temperature Time: 0 minutes 00:15:41.948 00:15:41.948 Number of Queues 00:15:41.948 ================ 00:15:41.948 Number of I/O Submission Queues: 127 00:15:41.948 Number of I/O Completion Queues: 127 00:15:41.948 00:15:41.948 Active Namespaces 00:15:41.948 ================= 00:15:41.948 Namespace ID:1 00:15:41.948 Error Recovery Timeout: Unlimited 00:15:41.948 Command Set Identifier: NVM (00h) 00:15:41.948 Deallocate: Supported 00:15:41.948 Deallocated/Unwritten Error: Not Supported 00:15:41.948 Deallocated Read Value: Unknown 00:15:41.948 Deallocate in Write Zeroes: Not Supported 00:15:41.948 Deallocated Guard Field: 0xFFFF 00:15:41.948 Flush: Supported 00:15:41.948 Reservation: Supported 00:15:41.948 Namespace Sharing Capabilities: Multiple Controllers 00:15:41.948 Size (in LBAs): 131072 (0GiB) 00:15:41.948 Capacity (in LBAs): 131072 (0GiB) 00:15:41.948 Utilization (in LBAs): 131072 (0GiB) 00:15:41.948 NGUID: F858C66C246D48CAA9A5FA86131B80E8 00:15:41.948 UUID: f858c66c-246d-48ca-a9a5-fa86131b80e8 00:15:41.948 Thin Provisioning: Not Supported 00:15:41.948 Per-NS Atomic Units: Yes 00:15:41.948 Atomic Boundary Size (Normal): 0 00:15:41.948 Atomic Boundary Size (PFail): 0 00:15:41.948 Atomic Boundary Offset: 0 00:15:41.948 Maximum Single Source Range Length: 65535 00:15:41.948 Maximum Copy Length: 65535 00:15:41.948 Maximum Source Range Count: 1 00:15:41.948 NGUID/EUI64 Never Reused: No 00:15:41.948 Namespace Write Protected: No 00:15:41.948 Number of LBA Formats: 1 00:15:41.948 Current LBA Format: LBA Format #00 00:15:41.948 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:15:41.948 00:15:41.948 13:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:42.207 [2024-11-25 13:14:39.703220] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.481 Initializing NVMe Controllers 00:15:47.481 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:47.481 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:47.481 Initialization complete. Launching workers. 00:15:47.481 ======================================================== 00:15:47.481 Latency(us) 00:15:47.481 Device Information : IOPS MiB/s Average min max 00:15:47.481 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33190.85 129.65 3855.68 1194.16 7667.24 00:15:47.481 ======================================================== 00:15:47.481 Total : 33190.85 129.65 3855.68 1194.16 7667.24 00:15:47.481 00:15:47.481 [2024-11-25 13:14:44.726070] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.481 13:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:47.481 [2024-11-25 13:14:44.985217] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.742 Initializing NVMe Controllers 00:15:52.742 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.742 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:52.742 Initialization complete. Launching workers. 00:15:52.742 ======================================================== 00:15:52.742 Latency(us) 00:15:52.742 Device Information : IOPS MiB/s Average min max 00:15:52.742 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.66 62.60 7992.41 5980.55 15986.72 00:15:52.742 ======================================================== 00:15:52.742 Total : 16025.66 62.60 7992.41 5980.55 15986.72 00:15:52.742 00:15:52.742 [2024-11-25 13:14:50.026175] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.742 13:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:52.742 [2024-11-25 13:14:50.262387] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.005 [2024-11-25 13:14:55.344734] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.005 Initializing NVMe Controllers 00:15:58.005 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:58.005 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:58.005 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:58.005 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:58.005 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:58.005 Initialization complete. 
Launching workers. 00:15:58.005 Starting thread on core 2 00:15:58.005 Starting thread on core 3 00:15:58.005 Starting thread on core 1 00:15:58.005 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:58.263 [2024-11-25 13:14:55.677808] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.547 [2024-11-25 13:14:58.749619] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.547 Initializing NVMe Controllers 00:16:01.547 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.547 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:01.547 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:01.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:01.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:01.547 Initialization complete. Launching workers. 
00:16:01.547 Starting thread on core 1 with urgent priority queue 00:16:01.547 Starting thread on core 2 with urgent priority queue 00:16:01.547 Starting thread on core 3 with urgent priority queue 00:16:01.547 Starting thread on core 0 with urgent priority queue 00:16:01.547 SPDK bdev Controller (SPDK1 ) core 0: 5346.33 IO/s 18.70 secs/100000 ios 00:16:01.547 SPDK bdev Controller (SPDK1 ) core 1: 4880.00 IO/s 20.49 secs/100000 ios 00:16:01.547 SPDK bdev Controller (SPDK1 ) core 2: 4895.67 IO/s 20.43 secs/100000 ios 00:16:01.547 SPDK bdev Controller (SPDK1 ) core 3: 5418.00 IO/s 18.46 secs/100000 ios 00:16:01.547 ======================================================== 00:16:01.547 00:16:01.547 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:01.547 [2024-11-25 13:14:59.078822] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:01.547 Initializing NVMe Controllers 00:16:01.547 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.547 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:01.547 Namespace ID: 1 size: 0GB 00:16:01.547 Initialization complete. 00:16:01.547 INFO: using host memory buffer for IO 00:16:01.547 Hello world! 
00:16:01.547 [2024-11-25 13:14:59.112354] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:01.547 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:01.804 [2024-11-25 13:14:59.417837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.178 Initializing NVMe Controllers 00:16:03.178 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.178 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:03.178 Initialization complete. Launching workers. 00:16:03.178 submit (in ns) avg, min, max = 8605.3, 3603.3, 4018007.8 00:16:03.178 complete (in ns) avg, min, max = 31251.7, 2064.4, 7991326.7 00:16:03.178 00:16:03.178 Submit histogram 00:16:03.178 ================ 00:16:03.178 Range in us Cumulative Count 00:16:03.178 3.603 - 3.627: 0.9696% ( 121) 00:16:03.178 3.627 - 3.650: 3.7740% ( 350) 00:16:03.178 3.650 - 3.674: 9.7676% ( 748) 00:16:03.178 3.674 - 3.698: 16.0417% ( 783) 00:16:03.178 3.698 - 3.721: 23.4135% ( 920) 00:16:03.178 3.721 - 3.745: 30.0321% ( 826) 00:16:03.178 3.745 - 3.769: 36.1939% ( 769) 00:16:03.178 3.769 - 3.793: 42.1394% ( 742) 00:16:03.178 3.793 - 3.816: 47.0353% ( 611) 00:16:03.178 3.816 - 3.840: 51.3221% ( 535) 00:16:03.178 3.840 - 3.864: 55.6891% ( 545) 00:16:03.178 3.864 - 3.887: 60.1442% ( 556) 00:16:03.178 3.887 - 3.911: 65.6170% ( 683) 00:16:03.178 3.911 - 3.935: 70.9856% ( 670) 00:16:03.178 3.935 - 3.959: 75.7853% ( 599) 00:16:03.178 3.959 - 3.982: 79.4872% ( 462) 00:16:03.178 3.982 - 4.006: 82.0513% ( 320) 00:16:03.178 4.006 - 4.030: 84.0304% ( 247) 00:16:03.178 4.030 - 4.053: 85.7692% ( 217) 00:16:03.178 4.053 - 4.077: 87.3478% ( 197) 00:16:03.178 4.077 - 4.101: 88.4615% ( 
139) 00:16:03.178 4.101 - 4.124: 89.7596% ( 162) 00:16:03.178 4.124 - 4.148: 91.3301% ( 196) 00:16:03.178 4.148 - 4.172: 92.3638% ( 129) 00:16:03.178 4.172 - 4.196: 93.2051% ( 105) 00:16:03.178 4.196 - 4.219: 93.8141% ( 76) 00:16:03.178 4.219 - 4.243: 94.2468% ( 54) 00:16:03.178 4.243 - 4.267: 94.5353% ( 36) 00:16:03.178 4.267 - 4.290: 94.8237% ( 36) 00:16:03.178 4.290 - 4.314: 95.0481% ( 28) 00:16:03.178 4.314 - 4.338: 95.2644% ( 27) 00:16:03.178 4.338 - 4.361: 95.3766% ( 14) 00:16:03.178 4.361 - 4.385: 95.4567% ( 10) 00:16:03.178 4.385 - 4.409: 95.5369% ( 10) 00:16:03.178 4.409 - 4.433: 95.6170% ( 10) 00:16:03.178 4.433 - 4.456: 95.6971% ( 10) 00:16:03.178 4.456 - 4.480: 95.7772% ( 10) 00:16:03.178 4.480 - 4.504: 95.8413% ( 8) 00:16:03.178 4.504 - 4.527: 95.8494% ( 1) 00:16:03.178 4.527 - 4.551: 95.8974% ( 6) 00:16:03.178 4.551 - 4.575: 95.9535% ( 7) 00:16:03.178 4.575 - 4.599: 95.9856% ( 4) 00:16:03.178 4.599 - 4.622: 96.0016% ( 2) 00:16:03.178 4.622 - 4.646: 96.0176% ( 2) 00:16:03.178 4.646 - 4.670: 96.0497% ( 4) 00:16:03.178 4.670 - 4.693: 96.0737% ( 3) 00:16:03.178 4.693 - 4.717: 96.0897% ( 2) 00:16:03.178 4.717 - 4.741: 96.1058% ( 2) 00:16:03.178 4.741 - 4.764: 96.1538% ( 6) 00:16:03.178 4.764 - 4.788: 96.2179% ( 8) 00:16:03.178 4.788 - 4.812: 96.2260% ( 1) 00:16:03.178 4.812 - 4.836: 96.2740% ( 6) 00:16:03.178 4.836 - 4.859: 96.3301% ( 7) 00:16:03.178 4.859 - 4.883: 96.3782% ( 6) 00:16:03.178 4.883 - 4.907: 96.4343% ( 7) 00:16:03.178 4.907 - 4.930: 96.4583% ( 3) 00:16:03.178 4.930 - 4.954: 96.5144% ( 7) 00:16:03.178 4.954 - 4.978: 96.5785% ( 8) 00:16:03.178 4.978 - 5.001: 96.6186% ( 5) 00:16:03.178 5.001 - 5.025: 96.6907% ( 9) 00:16:03.178 5.025 - 5.049: 96.7548% ( 8) 00:16:03.178 5.049 - 5.073: 96.7949% ( 5) 00:16:03.178 5.073 - 5.096: 96.8349% ( 5) 00:16:03.178 5.096 - 5.120: 96.8670% ( 4) 00:16:03.178 5.120 - 5.144: 96.8750% ( 1) 00:16:03.178 5.144 - 5.167: 96.8990% ( 3) 00:16:03.178 5.167 - 5.191: 96.9631% ( 8) 00:16:03.178 5.191 - 5.215: 97.0112% ( 6) 
00:16:03.178 5.215 - 5.239: 97.0753% ( 8) 00:16:03.178 5.239 - 5.262: 97.0994% ( 3) 00:16:03.178 5.262 - 5.286: 97.1074% ( 1) 00:16:03.178 5.286 - 5.310: 97.1474% ( 5) 00:16:03.178 5.310 - 5.333: 97.1635% ( 2) 00:16:03.178 5.333 - 5.357: 97.1875% ( 3) 00:16:03.178 5.357 - 5.381: 97.2276% ( 5) 00:16:03.178 5.381 - 5.404: 97.2356% ( 1) 00:16:03.178 5.404 - 5.428: 97.2436% ( 1) 00:16:03.178 5.428 - 5.452: 97.2516% ( 1) 00:16:03.178 5.452 - 5.476: 97.2596% ( 1) 00:16:03.178 5.476 - 5.499: 97.2756% ( 2) 00:16:03.178 5.499 - 5.523: 97.2997% ( 3) 00:16:03.178 5.523 - 5.547: 97.3317% ( 4) 00:16:03.178 5.547 - 5.570: 97.3397% ( 1) 00:16:03.178 5.594 - 5.618: 97.3478% ( 1) 00:16:03.178 5.618 - 5.641: 97.3558% ( 1) 00:16:03.178 5.665 - 5.689: 97.3798% ( 3) 00:16:03.178 5.689 - 5.713: 97.4038% ( 3) 00:16:03.178 5.713 - 5.736: 97.4119% ( 1) 00:16:03.178 5.736 - 5.760: 97.4279% ( 2) 00:16:03.178 5.760 - 5.784: 97.4439% ( 2) 00:16:03.178 5.784 - 5.807: 97.4599% ( 2) 00:16:03.178 5.831 - 5.855: 97.4760% ( 2) 00:16:03.178 5.855 - 5.879: 97.4920% ( 2) 00:16:03.178 5.879 - 5.902: 97.5160% ( 3) 00:16:03.178 5.926 - 5.950: 97.5401% ( 3) 00:16:03.178 5.973 - 5.997: 97.5481% ( 1) 00:16:03.178 5.997 - 6.021: 97.5641% ( 2) 00:16:03.178 6.044 - 6.068: 97.5721% ( 1) 00:16:03.178 6.068 - 6.116: 97.5801% ( 1) 00:16:03.178 6.116 - 6.163: 97.6042% ( 3) 00:16:03.178 6.258 - 6.305: 97.6122% ( 1) 00:16:03.178 6.305 - 6.353: 97.6282% ( 2) 00:16:03.178 6.353 - 6.400: 97.6362% ( 1) 00:16:03.178 6.400 - 6.447: 97.6442% ( 1) 00:16:03.178 6.495 - 6.542: 97.6522% ( 1) 00:16:03.178 6.590 - 6.637: 97.6603% ( 1) 00:16:03.178 6.637 - 6.684: 97.6683% ( 1) 00:16:03.178 6.684 - 6.732: 97.6763% ( 1) 00:16:03.178 6.921 - 6.969: 97.6843% ( 1) 00:16:03.178 6.969 - 7.016: 97.7083% ( 3) 00:16:03.178 7.016 - 7.064: 97.7163% ( 1) 00:16:03.178 7.111 - 7.159: 97.7244% ( 1) 00:16:03.178 7.206 - 7.253: 97.7484% ( 3) 00:16:03.178 7.301 - 7.348: 97.7644% ( 2) 00:16:03.178 7.348 - 7.396: 97.7724% ( 1) 00:16:03.178 7.396 - 
7.443: 97.7804% ( 1) 00:16:03.178 7.443 - 7.490: 97.7885% ( 1) 00:16:03.178 7.490 - 7.538: 97.7965% ( 1) 00:16:03.179 7.585 - 7.633: 97.8045% ( 1) 00:16:03.179 7.633 - 7.680: 97.8125% ( 1) 00:16:03.179 7.727 - 7.775: 97.8285% ( 2) 00:16:03.179 8.012 - 8.059: 97.8526% ( 3) 00:16:03.179 8.107 - 8.154: 97.8686% ( 2) 00:16:03.179 8.249 - 8.296: 97.8846% ( 2) 00:16:03.179 8.344 - 8.391: 97.8926% ( 1) 00:16:03.179 8.439 - 8.486: 97.9087% ( 2) 00:16:03.179 8.486 - 8.533: 97.9407% ( 4) 00:16:03.179 8.676 - 8.723: 97.9567% ( 2) 00:16:03.179 8.818 - 8.865: 97.9647% ( 1) 00:16:03.179 8.865 - 8.913: 97.9728% ( 1) 00:16:03.179 8.913 - 8.960: 97.9888% ( 2) 00:16:03.179 8.960 - 9.007: 98.0048% ( 2) 00:16:03.179 9.007 - 9.055: 98.0208% ( 2) 00:16:03.179 9.102 - 9.150: 98.0369% ( 2) 00:16:03.179 9.150 - 9.197: 98.0449% ( 1) 00:16:03.179 9.244 - 9.292: 98.0529% ( 1) 00:16:03.179 9.292 - 9.339: 98.0689% ( 2) 00:16:03.179 9.434 - 9.481: 98.0769% ( 1) 00:16:03.179 9.576 - 9.624: 98.0849% ( 1) 00:16:03.179 9.624 - 9.671: 98.0929% ( 1) 00:16:03.179 9.766 - 9.813: 98.1170% ( 3) 00:16:03.179 9.861 - 9.908: 98.1250% ( 1) 00:16:03.179 9.908 - 9.956: 98.1330% ( 1) 00:16:03.179 10.003 - 10.050: 98.1410% ( 1) 00:16:03.179 10.098 - 10.145: 98.1490% ( 1) 00:16:03.179 10.145 - 10.193: 98.1571% ( 1) 00:16:03.179 10.193 - 10.240: 98.1651% ( 1) 00:16:03.179 10.335 - 10.382: 98.1811% ( 2) 00:16:03.179 10.382 - 10.430: 98.1891% ( 1) 00:16:03.179 10.430 - 10.477: 98.1971% ( 1) 00:16:03.179 10.524 - 10.572: 98.2051% ( 1) 00:16:03.179 10.761 - 10.809: 98.2131% ( 1) 00:16:03.179 10.809 - 10.856: 98.2212% ( 1) 00:16:03.179 10.856 - 10.904: 98.2372% ( 2) 00:16:03.179 10.904 - 10.951: 98.2452% ( 1) 00:16:03.179 10.951 - 10.999: 98.2532% ( 1) 00:16:03.179 10.999 - 11.046: 98.2692% ( 2) 00:16:03.179 11.046 - 11.093: 98.2772% ( 1) 00:16:03.179 11.141 - 11.188: 98.2853% ( 1) 00:16:03.179 11.188 - 11.236: 98.2933% ( 1) 00:16:03.179 11.236 - 11.283: 98.3013% ( 1) 00:16:03.179 11.425 - 11.473: 98.3173% ( 2) 
00:16:03.179 11.615 - 11.662: 98.3253% ( 1) 00:16:03.179 11.662 - 11.710: 98.3333% ( 1) 00:16:03.179 11.804 - 11.852: 98.3413% ( 1) 00:16:03.179 11.947 - 11.994: 98.3494% ( 1) 00:16:03.179 11.994 - 12.041: 98.3574% ( 1) 00:16:03.179 12.041 - 12.089: 98.3654% ( 1) 00:16:03.179 12.136 - 12.231: 98.3734% ( 1) 00:16:03.179 12.231 - 12.326: 98.3814% ( 1) 00:16:03.179 12.326 - 12.421: 98.3894% ( 1) 00:16:03.179 12.421 - 12.516: 98.4135% ( 3) 00:16:03.179 12.516 - 12.610: 98.4295% ( 2) 00:16:03.179 12.610 - 12.705: 98.4455% ( 2) 00:16:03.179 12.705 - 12.800: 98.4615% ( 2) 00:16:03.179 12.800 - 12.895: 98.4696% ( 1) 00:16:03.179 12.990 - 13.084: 98.4936% ( 3) 00:16:03.179 13.179 - 13.274: 98.5016% ( 1) 00:16:03.179 13.274 - 13.369: 98.5176% ( 2) 00:16:03.179 13.369 - 13.464: 98.5417% ( 3) 00:16:03.179 13.748 - 13.843: 98.5577% ( 2) 00:16:03.179 13.843 - 13.938: 98.5737% ( 2) 00:16:03.179 13.938 - 14.033: 98.5978% ( 3) 00:16:03.179 14.033 - 14.127: 98.6218% ( 3) 00:16:03.179 14.222 - 14.317: 98.6378% ( 2) 00:16:03.179 14.507 - 14.601: 98.6538% ( 2) 00:16:03.179 14.696 - 14.791: 98.6699% ( 2) 00:16:03.179 14.791 - 14.886: 98.6779% ( 1) 00:16:03.179 15.076 - 15.170: 98.7019% ( 3) 00:16:03.179 15.455 - 15.550: 98.7099% ( 1) 00:16:03.179 15.550 - 15.644: 98.7260% ( 2) 00:16:03.179 16.972 - 17.067: 98.7420% ( 2) 00:16:03.179 17.067 - 17.161: 98.7580% ( 2) 00:16:03.179 17.161 - 17.256: 98.7740% ( 2) 00:16:03.179 17.351 - 17.446: 98.7981% ( 3) 00:16:03.179 17.446 - 17.541: 98.8301% ( 4) 00:16:03.179 17.541 - 17.636: 98.8462% ( 2) 00:16:03.179 17.636 - 17.730: 98.9022% ( 7) 00:16:03.179 17.730 - 17.825: 98.9663% ( 8) 00:16:03.179 17.825 - 17.920: 99.0144% ( 6) 00:16:03.179 17.920 - 18.015: 99.0946% ( 10) 00:16:03.179 18.015 - 18.110: 99.1506% ( 7) 00:16:03.179 18.110 - 18.204: 99.2067% ( 7) 00:16:03.179 18.204 - 18.299: 99.2628% ( 7) 00:16:03.179 18.299 - 18.394: 99.3349% ( 9) 00:16:03.179 18.394 - 18.489: 99.3750% ( 5) 00:16:03.179 18.489 - 18.584: 99.4712% ( 12) 00:16:03.179 
18.584 - 18.679: 99.5353% ( 8) 00:16:03.179 18.679 - 18.773: 99.6234% ( 11) 00:16:03.179 18.773 - 18.868: 99.6474% ( 3) 00:16:03.179 18.868 - 18.963: 99.6795% ( 4) 00:16:03.179 18.963 - 19.058: 99.7276% ( 6) 00:16:03.179 19.058 - 19.153: 99.7516% ( 3) 00:16:03.179 19.153 - 19.247: 99.7756% ( 3) 00:16:03.179 19.342 - 19.437: 99.7917% ( 2) 00:16:03.179 21.902 - 21.997: 99.7997% ( 1) 00:16:03.179 22.566 - 22.661: 99.8157% ( 2) 00:16:03.179 22.756 - 22.850: 99.8237% ( 1) 00:16:03.179 22.850 - 22.945: 99.8317% ( 1) 00:16:03.179 22.945 - 23.040: 99.8397% ( 1) 00:16:03.179 23.419 - 23.514: 99.8478% ( 1) 00:16:03.179 23.799 - 23.893: 99.8558% ( 1) 00:16:03.179 24.652 - 24.841: 99.8638% ( 1) 00:16:03.179 24.841 - 25.031: 99.8718% ( 1) 00:16:03.179 26.359 - 26.548: 99.8798% ( 1) 00:16:03.179 27.496 - 27.686: 99.8878% ( 1) 00:16:03.179 3980.705 - 4004.978: 99.9840% ( 12) 00:16:03.179 4004.978 - 4029.250: 100.0000% ( 2) 00:16:03.179 00:16:03.179 Complete histogram 00:16:03.179 ================== 00:16:03.179 Range in us Cumulative Count 00:16:03.179 2.062 - 2.074: 5.4167% ( 676) 00:16:03.179 2.074 - 2.086: 30.7131% ( 3157) 00:16:03.179 2.086 - 2.098: 33.2853% ( 321) 00:16:03.179 2.098 - 2.110: 37.0112% ( 465) 00:16:03.179 2.110 - 2.121: 41.9792% ( 620) 00:16:03.179 2.121 - 2.133: 43.0449% ( 133) 00:16:03.179 2.133 - 2.145: 51.8910% ( 1104) 00:16:03.179 2.145 - 2.157: 63.1010% ( 1399) 00:16:03.179 2.157 - 2.169: 64.5433% ( 180) 00:16:03.179 2.169 - 2.181: 67.1554% ( 326) 00:16:03.179 2.181 - 2.193: 69.5272% ( 296) 00:16:03.179 2.193 - 2.204: 70.1362% ( 76) 00:16:03.179 2.204 - 2.216: 75.3045% ( 645) 00:16:03.179 2.216 - 2.228: 83.6458% ( 1041) 00:16:03.179 2.228 - 2.240: 85.6811% ( 254) 00:16:03.179 2.240 - 2.252: 88.3574% ( 334) 00:16:03.179 2.252 - 2.264: 90.4247% ( 258) 00:16:03.179 2.264 - 2.276: 90.9856% ( 70) 00:16:03.179 2.276 - 2.287: 91.7548% ( 96) 00:16:03.179 2.287 - 2.299: 92.1554% ( 50) 00:16:03.179 2.299 - 2.311: 93.0128% ( 107) 00:16:03.179 2.311 - 2.323: 
94.0625% ( 131) 00:16:03.179 2.323 - 2.335: 94.3429% ( 35) 00:16:03.179 2.335 - 2.347: 94.3590% ( 2) 00:16:03.179 2.347 - 2.359: 94.4391% ( 10) 00:16:03.179 2.359 - 2.370: 94.5833% ( 18) 00:16:03.179 2.370 - 2.382: 94.7917% ( 26) 00:16:03.179 2.382 - 2.394: 95.1843% ( 49) 00:16:03.179 2.394 - 2.406: 95.5849% ( 50) 00:16:03.179 2.406 - 2.418: 95.7933% ( 26) 00:16:03.179 2.418 - 2.430: 96.0176% ( 28) 00:16:03.179 2.430 - 2.441: 96.1699% ( 19) 00:16:03.179 2.441 - 2.453: 96.3702% ( 25) 00:16:03.179 2.453 - 2.465: 96.4984% ( 16) 00:16:03.179 2.465 - 2.477: 96.6506% ( 19) 00:16:03.179 2.477 - 2.489: 96.8510% ( 25) 00:16:03.179 2.489 - 2.501: 97.0192% ( 21) 00:16:03.179 2.501 - 2.513: 97.1474% ( 16) 00:16:03.179 2.513 - 2.524: 97.2756% ( 16) 00:16:03.179 2.524 - 2.536: 97.3798% ( 13) 00:16:03.179 2.536 - 2.548: 97.4679% ( 11) 00:16:03.179 2.548 - 2.560: 97.5240% ( 7) 00:16:03.179 2.560 - 2.572: 97.5881% ( 8) 00:16:03.179 2.572 - 2.584: 97.6763% ( 11) 00:16:03.179 2.584 - 2.596: 97.7003% ( 3) 00:16:03.179 2.596 - 2.607: 97.7484% ( 6) 00:16:03.179 2.607 - 2.619: 97.7724% ( 3) 00:16:03.179 2.619 - 2.631: 97.7804% ( 1) 00:16:03.179 2.631 - 2.643: 97.7965% ( 2) 00:16:03.179 2.643 - 2.655: 97.8205% ( 3) 00:16:03.179 2.667 - 2.679: 97.8365% ( 2) 00:16:03.179 2.679 - 2.690: 97.8606% ( 3) 00:16:03.179 2.690 - 2.702: 97.8686% ( 1) 00:16:03.179 2.702 - 2.714: 97.8846% ( 2) 00:16:03.179 2.714 - 2.726: 97.8926% ( 1) 00:16:03.179 2.738 - 2.750: 97.9006% ( 1) 00:16:03.179 2.750 - 2.761: 97.9087% ( 1) 00:16:03.179 2.761 - 2.773: 97.9407% ( 4) 00:16:03.179 2.773 - 2.785: 97.9567% ( 2) 00:16:03.179 2.797 - 2.809: 97.9647% ( 1) 00:16:03.179 2.809 - 2.821: 97.9888% ( 3) 00:16:03.179 2.833 - 2.844: 97.9968% ( 1) 00:16:03.179 2.868 - 2.880: 98.0048% ( 1) 00:16:03.179 2.880 - 2.892: 98.0288% ( 3) 00:16:03.179 2.892 - 2.904: 98.0529% ( 3) 00:16:03.179 2.904 - 2.916: 98.0609% ( 1) 00:16:03.179 2.916 - 2.927: 98.0769% ( 2) 00:16:03.179 2.927 - 2.939: 98.0929% ( 2) 00:16:03.179 2.939 - 2.951: 
98.1090% ( 2) 00:16:03.180 2.951 - 2.963: 98.1170% ( 1) 00:16:03.180 2.975 - 2.987: 98.1250% ( 1) 00:16:03.180 2.987 - 2.999: 98.1330% ( 1) 00:16:03.180 3.022 - 3.034: 98.1410% ( 1) 00:16:03.180 3.034 - 3.058: 98.1490% ( 1) 00:16:03.180 3.058 - 3.081: 98.1731% ( 3) 00:16:03.180 3.081 - 3.105: 98.1891% ( 2) 00:16:03.180 3.129 - 3.153: 98.2131% ( 3) 00:16:03.180 3.153 - 3.176: 98.2212% ( 1) 00:16:03.180 3.200 - 3.224: 98.2372% ( 2) 00:16:03.180 3.224 - 3.247: 98.2612% ( 3) 00:16:03.180 3.247 - 3.271: 98.2692% ( 1) 00:16:03.180 3.271 - 3.295: 98.2853% ( 2) 00:16:03.180 3.319 - 3.342: 98.3093% ( 3) 00:16:03.180 3.366 - 3.390: 98.3173% ( 1) 00:16:03.180 3.390 - 3.413: 98.3253% ( 1) 00:16:03.180 3.413 - 3.437: 98.3413% ( 2) 00:16:03.180 3.461 - 3.484: 98.3574% ( 2) 00:16:03.180 3.508 - 3.532: 98.3814% ( 3) 00:16:03.180 3.532 - 3.556: 98.4054% ( 3) 00:16:03.180 3.556 - 3.579: 98.4135% ( 1) 00:16:03.180 3.579 - 3.603: 98.4295% ( 2) 00:16:03.180 3.603 - 3.627: 98.4375% ( 1) 00:16:03.180 3.650 - 3.674: 98.4455% ( 1) 00:16:03.180 3.674 - 3.698: 98.4696% ( 3) 00:16:03.180 3.721 - 3.745: 98.4776% ( 1) 00:16:03.180 3.745 - 3.769: 98.4856% ( 1) 00:16:03.180 3.769 - 3.793: 98.4936% ( 1) 00:16:03.180 3.864 - 3.887: 98.5016% ( 1) 00:16:03.180 4.006 - 4.030: 98.5096% ( 1) 00:16:03.180 4.053 - 4.077: 98.5176% ( 1) 00:16:03.180 4.124 - 4.148: 98.5256% ( 1) 00:16:03.180 4.148 - 4.172: 98.5337% ( 1) 00:16:03.180 4.172 - 4.196: 98.5417% ( 1) 00:16:03.180 4.361 - 4.385: 98.5497% ( 1) 00:16:03.180 4.385 - 4.409: 98.5577% ( 1) 00:16:03.180 4.433 - 4.456: 98.5657% ( 1) 00:16:03.180 4.480 - 4.504: 98.5737% ( 1) 00:16:03.180 4.812 - 4.836: 98.5817% ( 1) 00:16:03.180 5.073 - 5.096: 98.5897% ( 1) 00:16:03.180 5.879 - 5.902: 98.5978% ( 1) 00:16:03.180 6.021 - 6.044: 98.6058% ( 1) [2024-11-25 13:15:00.438132] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:03.180 6.163 - 6.210: 
98.6218% ( 1) 00:16:03.180 6.590 - 6.637: 98.6298% ( 1) 00:16:03.180 7.111 - 7.159: 98.6378% ( 1) 00:16:03.180 7.159 - 7.206: 98.6458% ( 1) 00:16:03.180 7.253 - 7.301: 98.6538% ( 1) 00:16:03.180 7.633 - 7.680: 98.6619% ( 1) 00:16:03.180 8.059 - 8.107: 98.6779% ( 2) 00:16:03.180 8.201 - 8.249: 98.6859% ( 1) 00:16:03.180 8.439 - 8.486: 98.6939% ( 1) 00:16:03.180 8.486 - 8.533: 98.7019% ( 1) 00:16:03.180 8.628 - 8.676: 98.7099% ( 1) 00:16:03.180 8.770 - 8.818: 98.7179% ( 1) 00:16:03.180 9.150 - 9.197: 98.7260% ( 1) 00:16:03.180 9.387 - 9.434: 98.7340% ( 1) 00:16:03.180 10.145 - 10.193: 98.7420% ( 1) 00:16:03.180 10.240 - 10.287: 98.7500% ( 1) 00:16:03.180 10.667 - 10.714: 98.7580% ( 1) 00:16:03.180 11.378 - 11.425: 98.7660% ( 1) 00:16:03.180 12.990 - 13.084: 98.7740% ( 1) 00:16:03.180 13.369 - 13.464: 98.7821% ( 1) 00:16:03.180 15.076 - 15.170: 98.7901% ( 1) 00:16:03.180 15.360 - 15.455: 98.7981% ( 1) 00:16:03.180 15.455 - 15.550: 98.8061% ( 1) 00:16:03.180 15.550 - 15.644: 98.8141% ( 1) 00:16:03.180 15.739 - 15.834: 98.8301% ( 2) 00:16:03.180 15.834 - 15.929: 98.8462% ( 2) 00:16:03.180 15.929 - 16.024: 98.8702% ( 3) 00:16:03.180 16.024 - 16.119: 98.8942% ( 3) 00:16:03.180 16.119 - 16.213: 98.9183% ( 3) 00:16:03.180 16.213 - 16.308: 98.9503% ( 4) 00:16:03.180 16.308 - 16.403: 98.9984% ( 6) 00:16:03.180 16.403 - 16.498: 99.0304% ( 4) 00:16:03.180 16.498 - 16.593: 99.0785% ( 6) 00:16:03.180 16.593 - 16.687: 99.1026% ( 3) 00:16:03.180 16.687 - 16.782: 99.1186% ( 2) 00:16:03.180 16.782 - 16.877: 99.1426% ( 3) 00:16:03.180 16.877 - 16.972: 99.1667% ( 3) 00:16:03.180 16.972 - 17.067: 99.1747% ( 1) 00:16:03.180 17.067 - 17.161: 99.1987% ( 3) 00:16:03.180 17.161 - 17.256: 99.2147% ( 2) 00:16:03.180 17.351 - 17.446: 99.2308% ( 2) 00:16:03.180 17.446 - 17.541: 99.2468% ( 2) 00:16:03.180 18.394 - 18.489: 99.2548% ( 1) 00:16:03.180 18.679 - 18.773: 99.2628% ( 1) 00:16:03.180 19.058 - 19.153: 99.2708% ( 1) 00:16:03.180 19.437 - 19.532: 99.2788% ( 1) 00:16:03.180 22.092 - 22.187: 
99.2869% ( 1) 00:16:03.180 22.661 - 22.756: 99.2949% ( 1) 00:16:03.180 3665.161 - 3689.434: 99.3029% ( 1) 00:16:03.180 3980.705 - 4004.978: 99.7917% ( 61) 00:16:03.180 4004.978 - 4029.250: 99.9760% ( 23) 00:16:03.180 6990.507 - 7039.052: 99.9840% ( 1) 00:16:03.180 7087.597 - 7136.142: 99.9920% ( 1) 00:16:03.180 7961.410 - 8009.956: 100.0000% ( 1) 00:16:03.180 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:03.180 [ 00:16:03.180 { 00:16:03.180 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:03.180 "subtype": "Discovery", 00:16:03.180 "listen_addresses": [], 00:16:03.180 "allow_any_host": true, 00:16:03.180 "hosts": [] 00:16:03.180 }, 00:16:03.180 { 00:16:03.180 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:03.180 "subtype": "NVMe", 00:16:03.180 "listen_addresses": [ 00:16:03.180 { 00:16:03.180 "trtype": "VFIOUSER", 00:16:03.180 "adrfam": "IPv4", 00:16:03.180 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:03.180 "trsvcid": "0" 00:16:03.180 } 00:16:03.180 ], 00:16:03.180 "allow_any_host": true, 00:16:03.180 "hosts": [], 00:16:03.180 "serial_number": "SPDK1", 00:16:03.180 "model_number": "SPDK bdev Controller", 00:16:03.180 "max_namespaces": 32, 00:16:03.180 "min_cntlid": 1, 00:16:03.180 "max_cntlid": 65519, 00:16:03.180 
"namespaces": [ 00:16:03.180 { 00:16:03.180 "nsid": 1, 00:16:03.180 "bdev_name": "Malloc1", 00:16:03.180 "name": "Malloc1", 00:16:03.180 "nguid": "F858C66C246D48CAA9A5FA86131B80E8", 00:16:03.180 "uuid": "f858c66c-246d-48ca-a9a5-fa86131b80e8" 00:16:03.180 } 00:16:03.180 ] 00:16:03.180 }, 00:16:03.180 { 00:16:03.180 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:03.180 "subtype": "NVMe", 00:16:03.180 "listen_addresses": [ 00:16:03.180 { 00:16:03.180 "trtype": "VFIOUSER", 00:16:03.180 "adrfam": "IPv4", 00:16:03.180 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:03.180 "trsvcid": "0" 00:16:03.180 } 00:16:03.180 ], 00:16:03.180 "allow_any_host": true, 00:16:03.180 "hosts": [], 00:16:03.180 "serial_number": "SPDK2", 00:16:03.180 "model_number": "SPDK bdev Controller", 00:16:03.180 "max_namespaces": 32, 00:16:03.180 "min_cntlid": 1, 00:16:03.180 "max_cntlid": 65519, 00:16:03.180 "namespaces": [ 00:16:03.180 { 00:16:03.180 "nsid": 1, 00:16:03.180 "bdev_name": "Malloc2", 00:16:03.180 "name": "Malloc2", 00:16:03.180 "nguid": "DBC85AA12058485790701A0714F371D8", 00:16:03.180 "uuid": "dbc85aa1-2058-4857-9070-1a0714f371d8" 00:16:03.180 } 00:16:03.180 ] 00:16:03.180 } 00:16:03.180 ] 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3149922 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 
00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:03.180 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:03.438 [2024-11-25 13:15:00.952850] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:03.438 Malloc3 00:16:03.438 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:04.004 [2024-11-25 13:15:01.357951] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:04.004 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.004 Asynchronous Event Request test 00:16:04.004 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.004 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:04.004 Registering asynchronous event callbacks... 00:16:04.004 Starting namespace attribute notice tests for all controllers... 00:16:04.004 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:04.004 aer_cb - Changed Namespace 00:16:04.004 Cleaning up... 
00:16:04.004 [ 00:16:04.004 { 00:16:04.004 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.004 "subtype": "Discovery", 00:16:04.004 "listen_addresses": [], 00:16:04.004 "allow_any_host": true, 00:16:04.004 "hosts": [] 00:16:04.004 }, 00:16:04.004 { 00:16:04.004 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.004 "subtype": "NVMe", 00:16:04.004 "listen_addresses": [ 00:16:04.004 { 00:16:04.004 "trtype": "VFIOUSER", 00:16:04.004 "adrfam": "IPv4", 00:16:04.004 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.004 "trsvcid": "0" 00:16:04.004 } 00:16:04.004 ], 00:16:04.004 "allow_any_host": true, 00:16:04.004 "hosts": [], 00:16:04.004 "serial_number": "SPDK1", 00:16:04.004 "model_number": "SPDK bdev Controller", 00:16:04.004 "max_namespaces": 32, 00:16:04.004 "min_cntlid": 1, 00:16:04.004 "max_cntlid": 65519, 00:16:04.004 "namespaces": [ 00:16:04.004 { 00:16:04.004 "nsid": 1, 00:16:04.004 "bdev_name": "Malloc1", 00:16:04.004 "name": "Malloc1", 00:16:04.004 "nguid": "F858C66C246D48CAA9A5FA86131B80E8", 00:16:04.004 "uuid": "f858c66c-246d-48ca-a9a5-fa86131b80e8" 00:16:04.004 }, 00:16:04.004 { 00:16:04.004 "nsid": 2, 00:16:04.004 "bdev_name": "Malloc3", 00:16:04.004 "name": "Malloc3", 00:16:04.004 "nguid": "3882A8EE1BBA4BAE94D7B3F9E6812F30", 00:16:04.004 "uuid": "3882a8ee-1bba-4bae-94d7-b3f9e6812f30" 00:16:04.004 } 00:16:04.004 ] 00:16:04.004 }, 00:16:04.004 { 00:16:04.004 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.004 "subtype": "NVMe", 00:16:04.004 "listen_addresses": [ 00:16:04.004 { 00:16:04.004 "trtype": "VFIOUSER", 00:16:04.004 "adrfam": "IPv4", 00:16:04.004 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.004 "trsvcid": "0" 00:16:04.004 } 00:16:04.004 ], 00:16:04.004 "allow_any_host": true, 00:16:04.004 "hosts": [], 00:16:04.004 "serial_number": "SPDK2", 00:16:04.004 "model_number": "SPDK bdev Controller", 00:16:04.004 "max_namespaces": 32, 00:16:04.004 "min_cntlid": 1, 00:16:04.004 "max_cntlid": 65519, 00:16:04.004 "namespaces": [ 
00:16:04.004 { 00:16:04.004 "nsid": 1, 00:16:04.004 "bdev_name": "Malloc2", 00:16:04.004 "name": "Malloc2", 00:16:04.004 "nguid": "DBC85AA12058485790701A0714F371D8", 00:16:04.004 "uuid": "dbc85aa1-2058-4857-9070-1a0714f371d8" 00:16:04.004 } 00:16:04.004 ] 00:16:04.004 } 00:16:04.005 ] 00:16:04.005 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3149922 00:16:04.005 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:04.005 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.005 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.005 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:04.266 [2024-11-25 13:15:01.675020] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:16:04.266 [2024-11-25 13:15:01.675065] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150099 ] 00:16:04.266 [2024-11-25 13:15:01.726200] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:04.266 [2024-11-25 13:15:01.728560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:04.266 [2024-11-25 13:15:01.728593] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1d73163000 00:16:04.266 [2024-11-25 13:15:01.729558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.266 [2024-11-25 13:15:01.730567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.266 [2024-11-25 13:15:01.733314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.266 [2024-11-25 13:15:01.733574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.266 [2024-11-25 13:15:01.734597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.266 [2024-11-25 13:15:01.735596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.266 [2024-11-25 13:15:01.736612] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:04.266 
[2024-11-25 13:15:01.737625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:04.266 [2024-11-25 13:15:01.738660] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:04.266 [2024-11-25 13:15:01.738682] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1d73158000 00:16:04.266 [2024-11-25 13:15:01.739833] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.266 [2024-11-25 13:15:01.756782] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:04.266 [2024-11-25 13:15:01.756823] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:04.266 [2024-11-25 13:15:01.758935] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:04.266 [2024-11-25 13:15:01.758987] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:04.266 [2024-11-25 13:15:01.759075] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:04.266 [2024-11-25 13:15:01.759097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:04.266 [2024-11-25 13:15:01.759108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:04.266 [2024-11-25 13:15:01.759941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:04.266 [2024-11-25 13:15:01.759963] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:04.266 [2024-11-25 13:15:01.759976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:04.266 [2024-11-25 13:15:01.760948] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:04.266 [2024-11-25 13:15:01.760969] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:04.266 [2024-11-25 13:15:01.760984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:04.266 [2024-11-25 13:15:01.761954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:04.266 [2024-11-25 13:15:01.761975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:04.266 [2024-11-25 13:15:01.762968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:04.266 [2024-11-25 13:15:01.762988] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:04.266 [2024-11-25 13:15:01.762997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:04.266 [2024-11-25 13:15:01.763009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:04.266 [2024-11-25 13:15:01.763119] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:04.266 [2024-11-25 13:15:01.763127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:04.266 [2024-11-25 13:15:01.763135] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:04.266 [2024-11-25 13:15:01.763973] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:04.266 [2024-11-25 13:15:01.764982] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:04.266 [2024-11-25 13:15:01.765986] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:04.266 [2024-11-25 13:15:01.766979] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.266 [2024-11-25 13:15:01.767047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:04.266 [2024-11-25 13:15:01.767991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:04.266 [2024-11-25 13:15:01.768012] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:04.266 [2024-11-25 13:15:01.768021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:04.266 [2024-11-25 13:15:01.768045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:04.266 [2024-11-25 13:15:01.768067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:04.266 [2024-11-25 13:15:01.768091] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.266 [2024-11-25 13:15:01.768101] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.266 [2024-11-25 13:15:01.768108] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.266 [2024-11-25 13:15:01.768125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.266 [2024-11-25 13:15:01.774316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:04.266 [2024-11-25 13:15:01.774340] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:04.266 [2024-11-25 13:15:01.774360] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:04.266 [2024-11-25 13:15:01.774369] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:04.267 [2024-11-25 13:15:01.774377] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:04.267 [2024-11-25 13:15:01.774385] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:04.267 [2024-11-25 13:15:01.774393] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:04.267 [2024-11-25 13:15:01.774402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.774415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.774431] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.782320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.782355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.267 [2024-11-25 13:15:01.782369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.267 [2024-11-25 13:15:01.782381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.267 [2024-11-25 13:15:01.782394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:04.267 [2024-11-25 13:15:01.782403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.782420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.782435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.790320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.790351] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:04.267 [2024-11-25 13:15:01.790361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.790378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.790389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.790403] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.798315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.798391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.798409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:04.267 
[2024-11-25 13:15:01.798423] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:04.267 [2024-11-25 13:15:01.798432] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:04.267 [2024-11-25 13:15:01.798439] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.267 [2024-11-25 13:15:01.798449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.806314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.806354] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:04.267 [2024-11-25 13:15:01.806381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.806398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.806412] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.267 [2024-11-25 13:15:01.806421] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.267 [2024-11-25 13:15:01.806427] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.267 [2024-11-25 13:15:01.806437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.814317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.814347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.814365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.814379] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:04.267 [2024-11-25 13:15:01.814388] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.267 [2024-11-25 13:15:01.814394] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.267 [2024-11-25 13:15:01.814404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.822318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.822345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.822359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.822375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.822386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.822395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.822403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.822412] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:04.267 [2024-11-25 13:15:01.822419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:04.267 [2024-11-25 13:15:01.822428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:04.267 [2024-11-25 13:15:01.822451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.830330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.830356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.838328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.838354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.846326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 
13:15:01.846353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.854331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.854364] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:04.267 [2024-11-25 13:15:01.854377] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:04.267 [2024-11-25 13:15:01.854383] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:04.267 [2024-11-25 13:15:01.854389] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:04.267 [2024-11-25 13:15:01.854395] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:04.267 [2024-11-25 13:15:01.854405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:04.267 [2024-11-25 13:15:01.854418] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:04.267 [2024-11-25 13:15:01.854426] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:04.267 [2024-11-25 13:15:01.854432] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.267 [2024-11-25 13:15:01.854445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.854459] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:04.267 [2024-11-25 13:15:01.854467] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:04.267 [2024-11-25 13:15:01.854473] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.267 [2024-11-25 13:15:01.854482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.854495] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:04.267 [2024-11-25 13:15:01.854504] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:04.267 [2024-11-25 13:15:01.854510] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:04.267 [2024-11-25 13:15:01.854518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:04.267 [2024-11-25 13:15:01.862319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.862348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.862367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:04.267 [2024-11-25 13:15:01.862380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:04.268 ===================================================== 00:16:04.268 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.268 ===================================================== 00:16:04.268 Controller Capabilities/Features 00:16:04.268 
================================ 00:16:04.268 Vendor ID: 4e58 00:16:04.268 Subsystem Vendor ID: 4e58 00:16:04.268 Serial Number: SPDK2 00:16:04.268 Model Number: SPDK bdev Controller 00:16:04.268 Firmware Version: 25.01 00:16:04.268 Recommended Arb Burst: 6 00:16:04.268 IEEE OUI Identifier: 8d 6b 50 00:16:04.268 Multi-path I/O 00:16:04.268 May have multiple subsystem ports: Yes 00:16:04.268 May have multiple controllers: Yes 00:16:04.268 Associated with SR-IOV VF: No 00:16:04.268 Max Data Transfer Size: 131072 00:16:04.268 Max Number of Namespaces: 32 00:16:04.268 Max Number of I/O Queues: 127 00:16:04.268 NVMe Specification Version (VS): 1.3 00:16:04.268 NVMe Specification Version (Identify): 1.3 00:16:04.268 Maximum Queue Entries: 256 00:16:04.268 Contiguous Queues Required: Yes 00:16:04.268 Arbitration Mechanisms Supported 00:16:04.268 Weighted Round Robin: Not Supported 00:16:04.268 Vendor Specific: Not Supported 00:16:04.268 Reset Timeout: 15000 ms 00:16:04.268 Doorbell Stride: 4 bytes 00:16:04.268 NVM Subsystem Reset: Not Supported 00:16:04.268 Command Sets Supported 00:16:04.268 NVM Command Set: Supported 00:16:04.268 Boot Partition: Not Supported 00:16:04.268 Memory Page Size Minimum: 4096 bytes 00:16:04.268 Memory Page Size Maximum: 4096 bytes 00:16:04.268 Persistent Memory Region: Not Supported 00:16:04.268 Optional Asynchronous Events Supported 00:16:04.268 Namespace Attribute Notices: Supported 00:16:04.268 Firmware Activation Notices: Not Supported 00:16:04.268 ANA Change Notices: Not Supported 00:16:04.268 PLE Aggregate Log Change Notices: Not Supported 00:16:04.268 LBA Status Info Alert Notices: Not Supported 00:16:04.268 EGE Aggregate Log Change Notices: Not Supported 00:16:04.268 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.268 Zone Descriptor Change Notices: Not Supported 00:16:04.268 Discovery Log Change Notices: Not Supported 00:16:04.268 Controller Attributes 00:16:04.268 128-bit Host Identifier: Supported 00:16:04.268 
Non-Operational Permissive Mode: Not Supported 00:16:04.268 NVM Sets: Not Supported 00:16:04.268 Read Recovery Levels: Not Supported 00:16:04.268 Endurance Groups: Not Supported 00:16:04.268 Predictable Latency Mode: Not Supported 00:16:04.268 Traffic Based Keep ALive: Not Supported 00:16:04.268 Namespace Granularity: Not Supported 00:16:04.268 SQ Associations: Not Supported 00:16:04.268 UUID List: Not Supported 00:16:04.268 Multi-Domain Subsystem: Not Supported 00:16:04.268 Fixed Capacity Management: Not Supported 00:16:04.268 Variable Capacity Management: Not Supported 00:16:04.268 Delete Endurance Group: Not Supported 00:16:04.268 Delete NVM Set: Not Supported 00:16:04.268 Extended LBA Formats Supported: Not Supported 00:16:04.268 Flexible Data Placement Supported: Not Supported 00:16:04.268 00:16:04.268 Controller Memory Buffer Support 00:16:04.268 ================================ 00:16:04.268 Supported: No 00:16:04.268 00:16:04.268 Persistent Memory Region Support 00:16:04.268 ================================ 00:16:04.268 Supported: No 00:16:04.268 00:16:04.268 Admin Command Set Attributes 00:16:04.268 ============================ 00:16:04.268 Security Send/Receive: Not Supported 00:16:04.268 Format NVM: Not Supported 00:16:04.268 Firmware Activate/Download: Not Supported 00:16:04.268 Namespace Management: Not Supported 00:16:04.268 Device Self-Test: Not Supported 00:16:04.268 Directives: Not Supported 00:16:04.268 NVMe-MI: Not Supported 00:16:04.268 Virtualization Management: Not Supported 00:16:04.268 Doorbell Buffer Config: Not Supported 00:16:04.268 Get LBA Status Capability: Not Supported 00:16:04.268 Command & Feature Lockdown Capability: Not Supported 00:16:04.268 Abort Command Limit: 4 00:16:04.268 Async Event Request Limit: 4 00:16:04.268 Number of Firmware Slots: N/A 00:16:04.268 Firmware Slot 1 Read-Only: N/A 00:16:04.268 Firmware Activation Without Reset: N/A 00:16:04.268 Multiple Update Detection Support: N/A 00:16:04.268 Firmware Update 
Granularity: No Information Provided 00:16:04.268 Per-Namespace SMART Log: No 00:16:04.268 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.268 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:04.268 Command Effects Log Page: Supported 00:16:04.268 Get Log Page Extended Data: Supported 00:16:04.268 Telemetry Log Pages: Not Supported 00:16:04.268 Persistent Event Log Pages: Not Supported 00:16:04.268 Supported Log Pages Log Page: May Support 00:16:04.268 Commands Supported & Effects Log Page: Not Supported 00:16:04.268 Feature Identifiers & Effects Log Page:May Support 00:16:04.268 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.268 Data Area 4 for Telemetry Log: Not Supported 00:16:04.268 Error Log Page Entries Supported: 128 00:16:04.268 Keep Alive: Supported 00:16:04.268 Keep Alive Granularity: 10000 ms 00:16:04.268 00:16:04.268 NVM Command Set Attributes 00:16:04.268 ========================== 00:16:04.268 Submission Queue Entry Size 00:16:04.268 Max: 64 00:16:04.268 Min: 64 00:16:04.268 Completion Queue Entry Size 00:16:04.268 Max: 16 00:16:04.268 Min: 16 00:16:04.268 Number of Namespaces: 32 00:16:04.268 Compare Command: Supported 00:16:04.268 Write Uncorrectable Command: Not Supported 00:16:04.268 Dataset Management Command: Supported 00:16:04.268 Write Zeroes Command: Supported 00:16:04.268 Set Features Save Field: Not Supported 00:16:04.268 Reservations: Not Supported 00:16:04.268 Timestamp: Not Supported 00:16:04.268 Copy: Supported 00:16:04.268 Volatile Write Cache: Present 00:16:04.268 Atomic Write Unit (Normal): 1 00:16:04.268 Atomic Write Unit (PFail): 1 00:16:04.268 Atomic Compare & Write Unit: 1 00:16:04.268 Fused Compare & Write: Supported 00:16:04.268 Scatter-Gather List 00:16:04.268 SGL Command Set: Supported (Dword aligned) 00:16:04.268 SGL Keyed: Not Supported 00:16:04.268 SGL Bit Bucket Descriptor: Not Supported 00:16:04.268 SGL Metadata Pointer: Not Supported 00:16:04.268 Oversized SGL: Not Supported 00:16:04.268 SGL 
Metadata Address: Not Supported 00:16:04.268 SGL Offset: Not Supported 00:16:04.268 Transport SGL Data Block: Not Supported 00:16:04.268 Replay Protected Memory Block: Not Supported 00:16:04.268 00:16:04.268 Firmware Slot Information 00:16:04.268 ========================= 00:16:04.268 Active slot: 1 00:16:04.268 Slot 1 Firmware Revision: 25.01 00:16:04.268 00:16:04.268 00:16:04.268 Commands Supported and Effects 00:16:04.268 ============================== 00:16:04.268 Admin Commands 00:16:04.268 -------------- 00:16:04.268 Get Log Page (02h): Supported 00:16:04.268 Identify (06h): Supported 00:16:04.268 Abort (08h): Supported 00:16:04.268 Set Features (09h): Supported 00:16:04.268 Get Features (0Ah): Supported 00:16:04.268 Asynchronous Event Request (0Ch): Supported 00:16:04.268 Keep Alive (18h): Supported 00:16:04.268 I/O Commands 00:16:04.268 ------------ 00:16:04.268 Flush (00h): Supported LBA-Change 00:16:04.268 Write (01h): Supported LBA-Change 00:16:04.268 Read (02h): Supported 00:16:04.268 Compare (05h): Supported 00:16:04.268 Write Zeroes (08h): Supported LBA-Change 00:16:04.268 Dataset Management (09h): Supported LBA-Change 00:16:04.268 Copy (19h): Supported LBA-Change 00:16:04.268 00:16:04.268 Error Log 00:16:04.268 ========= 00:16:04.268 00:16:04.268 Arbitration 00:16:04.268 =========== 00:16:04.268 Arbitration Burst: 1 00:16:04.268 00:16:04.268 Power Management 00:16:04.268 ================ 00:16:04.268 Number of Power States: 1 00:16:04.268 Current Power State: Power State #0 00:16:04.268 Power State #0: 00:16:04.268 Max Power: 0.00 W 00:16:04.268 Non-Operational State: Operational 00:16:04.268 Entry Latency: Not Reported 00:16:04.268 Exit Latency: Not Reported 00:16:04.268 Relative Read Throughput: 0 00:16:04.268 Relative Read Latency: 0 00:16:04.268 Relative Write Throughput: 0 00:16:04.268 Relative Write Latency: 0 00:16:04.268 Idle Power: Not Reported 00:16:04.268 Active Power: Not Reported 00:16:04.268 Non-Operational Permissive Mode: Not 
Supported 00:16:04.268 00:16:04.268 Health Information 00:16:04.268 ================== 00:16:04.268 Critical Warnings: 00:16:04.268 Available Spare Space: OK 00:16:04.268 Temperature: OK 00:16:04.268 Device Reliability: OK 00:16:04.268 Read Only: No 00:16:04.268 Volatile Memory Backup: OK 00:16:04.268 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:04.268 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:04.268 Available Spare: 0% 00:16:04.269 Available Sp[2024-11-25 13:15:01.862498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:04.269 [2024-11-25 13:15:01.870317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:04.269 [2024-11-25 13:15:01.870367] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:04.269 [2024-11-25 13:15:01.870386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.269 [2024-11-25 13:15:01.870398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.269 [2024-11-25 13:15:01.870408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.269 [2024-11-25 13:15:01.870418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:04.269 [2024-11-25 13:15:01.870515] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:04.269 [2024-11-25 13:15:01.870537] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:04.269 
[2024-11-25 13:15:01.871515] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.269 [2024-11-25 13:15:01.871601] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:04.269 [2024-11-25 13:15:01.871627] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:04.269 [2024-11-25 13:15:01.872526] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:04.269 [2024-11-25 13:15:01.872551] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:04.269 [2024-11-25 13:15:01.872624] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:04.269 [2024-11-25 13:15:01.875316] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:04.269 are Threshold: 0% 00:16:04.269 Life Percentage Used: 0% 00:16:04.269 Data Units Read: 0 00:16:04.269 Data Units Written: 0 00:16:04.269 Host Read Commands: 0 00:16:04.269 Host Write Commands: 0 00:16:04.269 Controller Busy Time: 0 minutes 00:16:04.269 Power Cycles: 0 00:16:04.269 Power On Hours: 0 hours 00:16:04.269 Unsafe Shutdowns: 0 00:16:04.269 Unrecoverable Media Errors: 0 00:16:04.269 Lifetime Error Log Entries: 0 00:16:04.269 Warning Temperature Time: 0 minutes 00:16:04.269 Critical Temperature Time: 0 minutes 00:16:04.269 00:16:04.269 Number of Queues 00:16:04.269 ================ 00:16:04.269 Number of I/O Submission Queues: 127 00:16:04.269 Number of I/O Completion Queues: 127 00:16:04.269 00:16:04.269 Active Namespaces 00:16:04.269 ================= 00:16:04.269 Namespace ID:1 00:16:04.269 Error Recovery Timeout: Unlimited 
00:16:04.269 Command Set Identifier: NVM (00h) 00:16:04.269 Deallocate: Supported 00:16:04.269 Deallocated/Unwritten Error: Not Supported 00:16:04.269 Deallocated Read Value: Unknown 00:16:04.269 Deallocate in Write Zeroes: Not Supported 00:16:04.269 Deallocated Guard Field: 0xFFFF 00:16:04.269 Flush: Supported 00:16:04.269 Reservation: Supported 00:16:04.269 Namespace Sharing Capabilities: Multiple Controllers 00:16:04.269 Size (in LBAs): 131072 (0GiB) 00:16:04.269 Capacity (in LBAs): 131072 (0GiB) 00:16:04.269 Utilization (in LBAs): 131072 (0GiB) 00:16:04.269 NGUID: DBC85AA12058485790701A0714F371D8 00:16:04.269 UUID: dbc85aa1-2058-4857-9070-1a0714f371d8 00:16:04.269 Thin Provisioning: Not Supported 00:16:04.269 Per-NS Atomic Units: Yes 00:16:04.269 Atomic Boundary Size (Normal): 0 00:16:04.269 Atomic Boundary Size (PFail): 0 00:16:04.269 Atomic Boundary Offset: 0 00:16:04.269 Maximum Single Source Range Length: 65535 00:16:04.269 Maximum Copy Length: 65535 00:16:04.269 Maximum Source Range Count: 1 00:16:04.269 NGUID/EUI64 Never Reused: No 00:16:04.269 Namespace Write Protected: No 00:16:04.269 Number of LBA Formats: 1 00:16:04.269 Current LBA Format: LBA Format #00 00:16:04.269 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.269 00:16:04.269 13:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:04.527 [2024-11-25 13:15:02.124474] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:09.789 Initializing NVMe Controllers 00:16:09.789 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:09.789 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:09.789 Initialization complete. Launching workers. 00:16:09.789 ======================================================== 00:16:09.789 Latency(us) 00:16:09.789 Device Information : IOPS MiB/s Average min max 00:16:09.789 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32163.28 125.64 3978.75 1226.37 7407.04 00:16:09.789 ======================================================== 00:16:09.789 Total : 32163.28 125.64 3978.75 1226.37 7407.04 00:16:09.789 00:16:09.789 [2024-11-25 13:15:07.233682] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:09.789 13:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:10.046 [2024-11-25 13:15:07.497415] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.309 Initializing NVMe Controllers 00:16:15.309 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.309 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:15.309 Initialization complete. Launching workers. 
00:16:15.309 ======================================================== 00:16:15.309 Latency(us) 00:16:15.309 Device Information : IOPS MiB/s Average min max 00:16:15.309 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29982.00 117.12 4269.04 1250.12 10378.61 00:16:15.309 ======================================================== 00:16:15.309 Total : 29982.00 117.12 4269.04 1250.12 10378.61 00:16:15.309 00:16:15.309 [2024-11-25 13:15:12.516252] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.309 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:15.309 [2024-11-25 13:15:12.752081] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.568 [2024-11-25 13:15:17.886453] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.568 Initializing NVMe Controllers 00:16:20.568 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.568 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:20.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:20.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:20.568 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:20.568 Initialization complete. Launching workers. 
00:16:20.568 Starting thread on core 2 00:16:20.568 Starting thread on core 3 00:16:20.568 Starting thread on core 1 00:16:20.568 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:20.568 [2024-11-25 13:15:18.226911] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:23.848 [2024-11-25 13:15:21.302590] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:23.848 Initializing NVMe Controllers 00:16:23.848 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.848 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:23.848 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:23.848 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:23.848 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:23.848 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:23.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:23.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:23.848 Initialization complete. Launching workers. 
00:16:23.848 Starting thread on core 1 with urgent priority queue 00:16:23.848 Starting thread on core 2 with urgent priority queue 00:16:23.848 Starting thread on core 3 with urgent priority queue 00:16:23.848 Starting thread on core 0 with urgent priority queue 00:16:23.848 SPDK bdev Controller (SPDK2 ) core 0: 5225.00 IO/s 19.14 secs/100000 ios 00:16:23.848 SPDK bdev Controller (SPDK2 ) core 1: 4902.00 IO/s 20.40 secs/100000 ios 00:16:23.848 SPDK bdev Controller (SPDK2 ) core 2: 5209.67 IO/s 19.20 secs/100000 ios 00:16:23.848 SPDK bdev Controller (SPDK2 ) core 3: 3339.67 IO/s 29.94 secs/100000 ios 00:16:23.848 ======================================================== 00:16:23.848 00:16:23.848 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:24.106 [2024-11-25 13:15:21.637823] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:24.106 Initializing NVMe Controllers 00:16:24.106 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.106 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:24.106 Namespace ID: 1 size: 0GB 00:16:24.106 Initialization complete. 00:16:24.106 INFO: using host memory buffer for IO 00:16:24.106 Hello world! 
00:16:24.106 [2024-11-25 13:15:21.647882] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:24.106 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:24.403 [2024-11-25 13:15:21.952139] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:25.803 Initializing NVMe Controllers 00:16:25.803 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.803 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:25.803 Initialization complete. Launching workers. 00:16:25.803 submit (in ns) avg, min, max = 7145.0, 3536.7, 4015861.1 00:16:25.803 complete (in ns) avg, min, max = 25988.6, 2068.9, 4024111.1 00:16:25.803 00:16:25.803 Submit histogram 00:16:25.803 ================ 00:16:25.803 Range in us Cumulative Count 00:16:25.803 3.532 - 3.556: 0.0317% ( 4) 00:16:25.803 3.556 - 3.579: 1.3305% ( 164) 00:16:25.803 3.579 - 3.603: 4.6171% ( 415) 00:16:25.803 3.603 - 3.627: 12.0456% ( 938) 00:16:25.803 3.627 - 3.650: 21.6203% ( 1209) 00:16:25.803 3.650 - 3.674: 31.8999% ( 1298) 00:16:25.803 3.674 - 3.698: 39.7402% ( 990) 00:16:25.803 3.698 - 3.721: 45.8541% ( 772) 00:16:25.803 3.721 - 3.745: 50.6692% ( 608) 00:16:25.804 3.745 - 3.769: 55.6585% ( 630) 00:16:25.804 3.769 - 3.793: 59.5074% ( 486) 00:16:25.804 3.793 - 3.816: 62.9207% ( 431) 00:16:25.804 3.816 - 3.840: 66.2469% ( 420) 00:16:25.804 3.840 - 3.864: 70.8482% ( 581) 00:16:25.804 3.864 - 3.887: 75.9721% ( 647) 00:16:25.804 3.887 - 3.911: 80.3437% ( 552) 00:16:25.804 3.911 - 3.935: 83.7966% ( 436) 00:16:25.804 3.935 - 3.959: 86.0299% ( 282) 00:16:25.804 3.959 - 3.982: 88.0257% ( 252) 00:16:25.804 3.982 - 4.006: 89.3403% ( 166) 00:16:25.804 4.006 - 4.030: 90.4886% ( 
145) 00:16:25.804 4.030 - 4.053: 91.4311% ( 119) 00:16:25.804 4.053 - 4.077: 92.1913% ( 96) 00:16:25.804 4.077 - 4.101: 93.0466% ( 108) 00:16:25.804 4.101 - 4.124: 93.9257% ( 111) 00:16:25.804 4.124 - 4.148: 94.5593% ( 80) 00:16:25.804 4.148 - 4.172: 95.0186% ( 58) 00:16:25.804 4.172 - 4.196: 95.4621% ( 56) 00:16:25.804 4.196 - 4.219: 95.8026% ( 43) 00:16:25.804 4.219 - 4.243: 96.0323% ( 29) 00:16:25.804 4.243 - 4.267: 96.2461% ( 27) 00:16:25.804 4.267 - 4.290: 96.4045% ( 20) 00:16:25.804 4.290 - 4.314: 96.5075% ( 13) 00:16:25.804 4.314 - 4.338: 96.6659% ( 20) 00:16:25.804 4.338 - 4.361: 96.7688% ( 13) 00:16:25.804 4.361 - 4.385: 96.8797% ( 14) 00:16:25.804 4.385 - 4.409: 96.9351% ( 7) 00:16:25.804 4.409 - 4.433: 97.0143% ( 10) 00:16:25.804 4.433 - 4.456: 97.0619% ( 6) 00:16:25.804 4.456 - 4.480: 97.0777% ( 2) 00:16:25.804 4.480 - 4.504: 97.1173% ( 5) 00:16:25.804 4.504 - 4.527: 97.1410% ( 3) 00:16:25.804 4.527 - 4.551: 97.1648% ( 3) 00:16:25.804 4.551 - 4.575: 97.1727% ( 1) 00:16:25.804 4.575 - 4.599: 97.1886% ( 2) 00:16:25.804 4.599 - 4.622: 97.2123% ( 3) 00:16:25.804 4.622 - 4.646: 97.2282% ( 2) 00:16:25.804 4.646 - 4.670: 97.2361% ( 1) 00:16:25.804 4.693 - 4.717: 97.2598% ( 3) 00:16:25.804 4.717 - 4.741: 97.2757% ( 2) 00:16:25.804 4.741 - 4.764: 97.2915% ( 2) 00:16:25.804 4.764 - 4.788: 97.2994% ( 1) 00:16:25.804 4.788 - 4.812: 97.3390% ( 5) 00:16:25.804 4.812 - 4.836: 97.4024% ( 8) 00:16:25.804 4.836 - 4.859: 97.4895% ( 11) 00:16:25.804 4.859 - 4.883: 97.5212% ( 4) 00:16:25.804 4.883 - 4.907: 97.5608% ( 5) 00:16:25.804 4.907 - 4.930: 97.6241% ( 8) 00:16:25.804 4.930 - 4.954: 97.6558% ( 4) 00:16:25.804 4.954 - 4.978: 97.7192% ( 8) 00:16:25.804 4.978 - 5.001: 97.7509% ( 4) 00:16:25.804 5.001 - 5.025: 97.7984% ( 6) 00:16:25.804 5.025 - 5.049: 97.8696% ( 9) 00:16:25.804 5.049 - 5.073: 97.9172% ( 6) 00:16:25.804 5.073 - 5.096: 97.9251% ( 1) 00:16:25.804 5.096 - 5.120: 97.9488% ( 3) 00:16:25.804 5.120 - 5.144: 97.9805% ( 4) 00:16:25.804 5.144 - 5.167: 97.9964% ( 2) 
00:16:25.804 5.167 - 5.191: 98.0201% ( 3) 00:16:25.804 5.191 - 5.215: 98.0518% ( 4) 00:16:25.804 5.215 - 5.239: 98.0676% ( 2) 00:16:25.804 5.239 - 5.262: 98.0835% ( 2) 00:16:25.804 5.262 - 5.286: 98.1152% ( 4) 00:16:25.804 5.286 - 5.310: 98.1310% ( 2) 00:16:25.804 5.310 - 5.333: 98.1389% ( 1) 00:16:25.804 5.357 - 5.381: 98.1468% ( 1) 00:16:25.804 5.499 - 5.523: 98.1547% ( 1) 00:16:25.804 5.523 - 5.547: 98.1627% ( 1) 00:16:25.804 5.689 - 5.713: 98.1706% ( 1) 00:16:25.804 5.713 - 5.736: 98.1785% ( 1) 00:16:25.804 5.736 - 5.760: 98.1864% ( 1) 00:16:25.804 5.760 - 5.784: 98.1943% ( 1) 00:16:25.804 5.807 - 5.831: 98.2102% ( 2) 00:16:25.804 5.831 - 5.855: 98.2260% ( 2) 00:16:25.804 5.879 - 5.902: 98.2339% ( 1) 00:16:25.804 5.902 - 5.926: 98.2498% ( 2) 00:16:25.804 5.950 - 5.973: 98.2577% ( 1) 00:16:25.804 6.021 - 6.044: 98.2656% ( 1) 00:16:25.804 6.068 - 6.116: 98.2815% ( 2) 00:16:25.804 6.210 - 6.258: 98.2973% ( 2) 00:16:25.804 6.258 - 6.305: 98.3052% ( 1) 00:16:25.804 6.305 - 6.353: 98.3131% ( 1) 00:16:25.804 6.400 - 6.447: 98.3211% ( 1) 00:16:25.804 6.637 - 6.684: 98.3290% ( 1) 00:16:25.804 6.969 - 7.016: 98.3369% ( 1) 00:16:25.804 7.206 - 7.253: 98.3448% ( 1) 00:16:25.804 7.253 - 7.301: 98.3527% ( 1) 00:16:25.804 7.443 - 7.490: 98.3686% ( 2) 00:16:25.804 7.490 - 7.538: 98.3765% ( 1) 00:16:25.804 7.727 - 7.775: 98.3844% ( 1) 00:16:25.804 7.870 - 7.917: 98.3923% ( 1) 00:16:25.804 7.917 - 7.964: 98.4161% ( 3) 00:16:25.804 8.107 - 8.154: 98.4319% ( 2) 00:16:25.804 8.249 - 8.296: 98.4399% ( 1) 00:16:25.804 8.296 - 8.344: 98.4478% ( 1) 00:16:25.804 8.344 - 8.391: 98.4636% ( 2) 00:16:25.804 8.533 - 8.581: 98.4953% ( 4) 00:16:25.804 8.676 - 8.723: 98.5032% ( 1) 00:16:25.804 8.723 - 8.770: 98.5349% ( 4) 00:16:25.804 8.865 - 8.913: 98.5507% ( 2) 00:16:25.804 8.913 - 8.960: 98.5586% ( 1) 00:16:25.804 8.960 - 9.007: 98.5745% ( 2) 00:16:25.804 9.007 - 9.055: 98.5903% ( 2) 00:16:25.804 9.055 - 9.102: 98.6062% ( 2) 00:16:25.804 9.102 - 9.150: 98.6141% ( 1) 00:16:25.804 9.197 - 
9.244: 98.6220% ( 1) 00:16:25.804 9.292 - 9.339: 98.6299% ( 1) 00:16:25.804 9.339 - 9.387: 98.6458% ( 2) 00:16:25.804 9.387 - 9.434: 98.6537% ( 1) 00:16:25.804 9.434 - 9.481: 98.6616% ( 1) 00:16:25.804 9.576 - 9.624: 98.6695% ( 1) 00:16:25.804 9.624 - 9.671: 98.6774% ( 1) 00:16:25.804 9.719 - 9.766: 98.6854% ( 1) 00:16:25.804 9.861 - 9.908: 98.6933% ( 1) 00:16:25.804 10.050 - 10.098: 98.7012% ( 1) 00:16:25.804 10.098 - 10.145: 98.7091% ( 1) 00:16:25.804 10.145 - 10.193: 98.7250% ( 2) 00:16:25.804 10.193 - 10.240: 98.7329% ( 1) 00:16:25.804 10.287 - 10.335: 98.7408% ( 1) 00:16:25.804 10.430 - 10.477: 98.7487% ( 1) 00:16:25.804 10.714 - 10.761: 98.7566% ( 1) 00:16:25.804 10.761 - 10.809: 98.7725% ( 2) 00:16:25.804 10.809 - 10.856: 98.7804% ( 1) 00:16:25.804 10.856 - 10.904: 98.7883% ( 1) 00:16:25.804 10.904 - 10.951: 98.7962% ( 1) 00:16:25.804 11.046 - 11.093: 98.8041% ( 1) 00:16:25.804 11.188 - 11.236: 98.8121% ( 1) 00:16:25.804 11.330 - 11.378: 98.8200% ( 1) 00:16:25.804 11.994 - 12.041: 98.8279% ( 1) 00:16:25.804 12.516 - 12.610: 98.8358% ( 1) 00:16:25.804 12.990 - 13.084: 98.8437% ( 1) 00:16:25.804 13.084 - 13.179: 98.8517% ( 1) 00:16:25.804 13.179 - 13.274: 98.8596% ( 1) 00:16:25.804 13.938 - 14.033: 98.8675% ( 1) 00:16:25.804 14.127 - 14.222: 98.8754% ( 1) 00:16:25.804 14.412 - 14.507: 98.8913% ( 2) 00:16:25.804 14.601 - 14.696: 98.8992% ( 1) 00:16:25.804 14.886 - 14.981: 98.9071% ( 1) 00:16:25.804 17.256 - 17.351: 98.9388% ( 4) 00:16:25.804 17.351 - 17.446: 98.9942% ( 7) 00:16:25.804 17.446 - 17.541: 99.0338% ( 5) 00:16:25.804 17.541 - 17.636: 99.0497% ( 2) 00:16:25.804 17.636 - 17.730: 99.1130% ( 8) 00:16:25.804 17.730 - 17.825: 99.1526% ( 5) 00:16:25.804 17.825 - 17.920: 99.2476% ( 12) 00:16:25.804 17.920 - 18.015: 99.3031% ( 7) 00:16:25.804 18.015 - 18.110: 99.3348% ( 4) 00:16:25.804 18.110 - 18.204: 99.4060% ( 9) 00:16:25.804 18.204 - 18.299: 99.4456% ( 5) 00:16:25.804 18.299 - 18.394: 99.5169% ( 9) 00:16:25.804 18.394 - 18.489: 99.5882% ( 9) 00:16:25.804 
18.489 - 18.584: 99.6674% ( 10) 00:16:25.804 18.584 - 18.679: 99.6911% ( 3) 00:16:25.804 18.679 - 18.773: 99.7466% ( 7) 00:16:25.804 18.773 - 18.868: 99.7624% ( 2) 00:16:25.804 18.868 - 18.963: 99.7783% ( 2) 00:16:25.804 18.963 - 19.058: 99.8179% ( 5) 00:16:25.804 19.058 - 19.153: 99.8416% ( 3) 00:16:25.804 19.153 - 19.247: 99.8495% ( 1) 00:16:25.804 19.721 - 19.816: 99.8574% ( 1) 00:16:25.804 21.144 - 21.239: 99.8654% ( 1) 00:16:25.804 22.661 - 22.756: 99.8733% ( 1) 00:16:25.804 23.419 - 23.514: 99.8812% ( 1) 00:16:25.804 23.514 - 23.609: 99.8891% ( 1) 00:16:25.804 24.273 - 24.462: 99.8970% ( 1) 00:16:25.804 26.548 - 26.738: 99.9050% ( 1) 00:16:25.804 28.255 - 28.444: 99.9129% ( 1) 00:16:25.804 28.824 - 29.013: 99.9208% ( 1) 00:16:25.804 3980.705 - 4004.978: 99.9842% ( 8) 00:16:25.804 4004.978 - 4029.250: 100.0000% ( 2) 00:16:25.804 00:16:25.804 Complete histogram 00:16:25.804 ================== 00:16:25.804 Range in us Cumulative Count 00:16:25.804 2.062 - 2.074: 1.0612% ( 134) 00:16:25.804 2.074 - 2.086: 30.5853% ( 3728) 00:16:25.804 2.086 - 2.098: 41.9894% ( 1440) 00:16:25.804 2.098 - 2.110: 44.6028% ( 330) 00:16:25.804 2.110 - 2.121: 52.8392% ( 1040) 00:16:25.805 2.121 - 2.133: 55.5635% ( 344) 00:16:25.805 2.133 - 2.145: 59.1669% ( 455) 00:16:25.805 2.145 - 2.157: 70.9591% ( 1489) 00:16:25.805 2.157 - 2.169: 73.6834% ( 344) 00:16:25.805 2.169 - 2.181: 75.2118% ( 193) 00:16:25.805 2.181 - 2.193: 78.1262% ( 368) 00:16:25.805 2.193 - 2.204: 79.1558% ( 130) 00:16:25.805 2.204 - 2.216: 80.4467% ( 163) 00:16:25.805 2.216 - 2.228: 86.1012% ( 714) 00:16:25.805 2.228 - 2.240: 88.1049% ( 253) 00:16:25.805 2.240 - 2.252: 90.3302% ( 281) 00:16:25.805 2.252 - 2.264: 92.3656% ( 257) 00:16:25.805 2.264 - 2.276: 93.1100% ( 94) 00:16:25.805 2.276 - 2.287: 93.4664% ( 45) 00:16:25.805 2.287 - 2.299: 93.9099% ( 56) 00:16:25.805 2.299 - 2.311: 94.2821% ( 47) 00:16:25.805 2.311 - 2.323: 95.0740% ( 100) 00:16:25.805 2.323 - 2.335: 95.3829% ( 39) 00:16:25.805 2.335 - 2.347: 95.4859% 
( 13) 00:16:25.805 2.347 - 2.359: 95.5413% ( 7) 00:16:25.805 2.359 - 2.370: 95.6126% ( 9) 00:16:25.805 2.370 - 2.382: 95.6601% ( 6) 00:16:25.805 2.382 - 2.394: 95.8422% ( 23) 00:16:25.805 2.394 - 2.406: 96.0640% ( 28) 00:16:25.805 2.406 - 2.418: 96.2382% ( 22) 00:16:25.805 2.418 - 2.430: 96.4679% ( 29) 00:16:25.805 2.430 - 2.441: 96.6976% ( 29) 00:16:25.805 2.441 - 2.453: 96.7926% ( 12) 00:16:25.805 2.453 - 2.465: 96.9431% ( 19) 00:16:25.805 2.465 - 2.477: 97.1252% ( 23) 00:16:25.805 2.477 - 2.489: 97.3628% ( 30) 00:16:25.805 2.489 - 2.501: 97.5766% ( 27) 00:16:25.805 2.501 - 2.513: 97.7588% ( 23) 00:16:25.805 2.513 - 2.524: 97.9013% ( 18) 00:16:25.805 2.524 - 2.536: 98.0360% ( 17) 00:16:25.805 2.536 - 2.548: 98.1706% ( 17) 00:16:25.805 2.548 - 2.560: 98.2498% ( 10) 00:16:25.805 2.560 - 2.572: 98.3686% ( 15) 00:16:25.805 2.572 - 2.584: 98.4003% ( 4) 00:16:25.805 2.584 - 2.596: 98.4478% ( 6) 00:16:25.805 2.596 - 2.607: 98.4794% ( 4) 00:16:25.805 2.607 - 2.619: 98.5349% ( 7) 00:16:25.805 2.631 - 2.643: 98.5428% ( 1) 00:16:25.805 2.643 - 2.655: 98.5507% ( 1) 00:16:25.805 2.655 - 2.667: 98.5586% ( 1) 00:16:25.805 2.667 - 2.679: 98.5666% ( 1) 00:16:25.805 2.690 - 2.702: 98.5745% ( 1) 00:16:25.805 2.702 - 2.714: 98.5903% ( 2) 00:16:25.805 2.797 - 2.809: 98.5982% ( 1) 00:16:25.805 2.809 - 2.821: 98.6062% ( 1) 00:16:25.805 2.821 - 2.833: 98.6141% ( 1) 00:16:25.805 2.844 - 2.856: 98.6220% ( 1) 00:16:25.805 2.987 - 2.999: 98.6299% ( 1) 00:16:25.805 3.295 - 3.319: 98.6378% ( 1) 00:16:25.805 3.413 - 3.437: 98.6458% ( 1) 00:16:25.805 3.579 - 3.603: 98.6537% ( 1) 00:16:25.805 3.603 - 3.627: 98.6616% ( 1) 00:16:25.805 3.627 - 3.650: 98.6695% ( 1) 00:16:25.805 3.650 - 3.674: 98.6774% ( 1) 00:16:25.805 3.674 - 3.698: 98.6933% ( 2) 00:16:25.805 3.769 - 3.793: 98.7012% ( 1) 00:16:25.805 3.816 - 3.840: 98.7170% ( 2) 00:16:25.805 3.840 - 3.864: 98.7329% ( 2) 00:16:25.805 3.887 - 3.911: 98.7408% ( 1) 00:16:25.805 3.911 - 3.935: 98.7487% ( 1) 00:16:25.805 3.935 - 3.959: 98.7566% ( 1) 
00:16:25.805 3.959 - 3.982: 98.7883% ( 4) 00:16:25.805 [2024-11-25 13:15:23.047109] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:25.805 3.982 - 4.006: 98.7962% ( 1) 00:16:25.805 4.006 - 4.030: 98.8279% ( 4) 00:16:25.805 4.030 - 4.053: 98.8437% ( 2) 00:16:25.805 4.101 - 4.124: 98.8596% ( 2) 00:16:25.805 4.148 - 4.172: 98.8675% ( 1) 00:16:25.805 4.219 - 4.243: 98.8754% ( 1) 00:16:25.805 4.290 - 4.314: 98.8833% ( 1) 00:16:25.805 4.361 - 4.385: 98.8992% ( 2) 00:16:25.805 4.527 - 4.551: 98.9071% ( 1) 00:16:25.805 5.736 - 5.760: 98.9150% ( 1) 00:16:25.805 6.400 - 6.447: 98.9229% ( 1) 00:16:25.805 6.874 - 6.921: 98.9309% ( 1) 00:16:25.805 7.206 - 7.253: 98.9388% ( 1) 00:16:25.805 7.775 - 7.822: 98.9467% ( 1) 00:16:25.805 7.870 - 7.917: 98.9546% ( 1) 00:16:25.805 8.818 - 8.865: 98.9705% ( 2) 00:16:25.805 9.007 - 9.055: 98.9784% ( 1) 00:16:25.805 15.644 - 15.739: 98.9942% ( 2) 00:16:25.805 15.739 - 15.834: 99.0101% ( 2) 00:16:25.805 15.834 - 15.929: 99.0259% ( 2) 00:16:25.805 15.929 - 16.024: 99.0417% ( 2) 00:16:25.805 16.024 - 16.119: 99.0734% ( 4) 00:16:25.805 16.119 - 16.213: 99.0972% ( 3) 00:16:25.805 16.213 - 16.308: 99.1130% ( 2) 00:16:25.805 16.308 - 16.403: 99.1368% ( 3) 00:16:25.805 16.403 - 16.498: 99.1526% ( 2) 00:16:25.805 16.498 - 16.593: 99.1684% ( 2) 00:16:25.805 16.593 - 16.687: 99.1843% ( 2) 00:16:25.805 16.687 - 16.782: 99.2239% ( 5) 00:16:25.805 16.782 - 16.877: 99.2872% ( 8) 00:16:25.805 16.877 - 16.972: 99.3110% ( 3) 00:16:25.805 17.161 - 17.256: 99.3348% ( 3) 00:16:25.805 17.351 - 17.446: 99.3506% ( 2) 00:16:25.805 17.541 - 17.636: 99.3585% ( 1) 00:16:25.805 17.825 - 17.920: 99.3664% ( 1) 00:16:25.805 17.920 - 18.015: 99.3744% ( 1) 00:16:25.805 18.204 - 18.299: 99.3823% ( 1) 00:16:25.805 18.394 - 18.489: 99.3902% ( 1) 00:16:25.805 18.489 - 18.584: 99.3981% ( 1) 00:16:25.805 18.963 - 19.058: 99.4060% ( 1) 00:16:25.805 3980.705 - 4004.978: 99.8495% ( 56) 00:16:25.805 4004.978 - 4029.250: 
100.0000% ( 19) 00:16:25.805 00:16:25.805 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:25.805 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:25.805 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:25.805 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:25.805 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:25.805 [ 00:16:25.805 { 00:16:25.805 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:25.805 "subtype": "Discovery", 00:16:25.805 "listen_addresses": [], 00:16:25.805 "allow_any_host": true, 00:16:25.805 "hosts": [] 00:16:25.805 }, 00:16:25.805 { 00:16:25.805 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:25.805 "subtype": "NVMe", 00:16:25.805 "listen_addresses": [ 00:16:25.805 { 00:16:25.805 "trtype": "VFIOUSER", 00:16:25.805 "adrfam": "IPv4", 00:16:25.805 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:25.805 "trsvcid": "0" 00:16:25.805 } 00:16:25.805 ], 00:16:25.805 "allow_any_host": true, 00:16:25.805 "hosts": [], 00:16:25.805 "serial_number": "SPDK1", 00:16:25.805 "model_number": "SPDK bdev Controller", 00:16:25.805 "max_namespaces": 32, 00:16:25.805 "min_cntlid": 1, 00:16:25.805 "max_cntlid": 65519, 00:16:25.805 "namespaces": [ 00:16:25.805 { 00:16:25.805 "nsid": 1, 00:16:25.805 "bdev_name": "Malloc1", 00:16:25.805 "name": "Malloc1", 00:16:25.805 "nguid": "F858C66C246D48CAA9A5FA86131B80E8", 00:16:25.805 "uuid": "f858c66c-246d-48ca-a9a5-fa86131b80e8" 00:16:25.805 }, 00:16:25.805 { 00:16:25.805 "nsid": 2, 00:16:25.805 "bdev_name": "Malloc3", 
00:16:25.805 "name": "Malloc3", 00:16:25.805 "nguid": "3882A8EE1BBA4BAE94D7B3F9E6812F30", 00:16:25.805 "uuid": "3882a8ee-1bba-4bae-94d7-b3f9e6812f30" 00:16:25.805 } 00:16:25.805 ] 00:16:25.805 }, 00:16:25.805 { 00:16:25.805 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:25.805 "subtype": "NVMe", 00:16:25.805 "listen_addresses": [ 00:16:25.805 { 00:16:25.805 "trtype": "VFIOUSER", 00:16:25.805 "adrfam": "IPv4", 00:16:25.805 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:25.805 "trsvcid": "0" 00:16:25.805 } 00:16:25.805 ], 00:16:25.805 "allow_any_host": true, 00:16:25.805 "hosts": [], 00:16:25.805 "serial_number": "SPDK2", 00:16:25.805 "model_number": "SPDK bdev Controller", 00:16:25.805 "max_namespaces": 32, 00:16:25.805 "min_cntlid": 1, 00:16:25.805 "max_cntlid": 65519, 00:16:25.805 "namespaces": [ 00:16:25.805 { 00:16:25.805 "nsid": 1, 00:16:25.806 "bdev_name": "Malloc2", 00:16:25.806 "name": "Malloc2", 00:16:25.806 "nguid": "DBC85AA12058485790701A0714F371D8", 00:16:25.806 "uuid": "dbc85aa1-2058-4857-9070-1a0714f371d8" 00:16:25.806 } 00:16:25.806 ] 00:16:25.806 } 00:16:25.806 ] 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3153146 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- 
# '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:25.806 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:26.064 [2024-11-25 13:15:23.538860] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:26.064 Malloc4 00:16:26.064 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:26.321 [2024-11-25 13:15:23.933003] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:26.321 13:15:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:26.321 Asynchronous Event Request test 00:16:26.321 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.321 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:26.321 Registering asynchronous event callbacks... 00:16:26.321 Starting namespace attribute notice tests for all controllers... 00:16:26.321 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:26.321 aer_cb - Changed Namespace 00:16:26.321 Cleaning up... 
00:16:26.579 [ 00:16:26.579 { 00:16:26.579 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:26.579 "subtype": "Discovery", 00:16:26.579 "listen_addresses": [], 00:16:26.579 "allow_any_host": true, 00:16:26.579 "hosts": [] 00:16:26.579 }, 00:16:26.579 { 00:16:26.579 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:26.579 "subtype": "NVMe", 00:16:26.579 "listen_addresses": [ 00:16:26.579 { 00:16:26.579 "trtype": "VFIOUSER", 00:16:26.579 "adrfam": "IPv4", 00:16:26.579 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:26.579 "trsvcid": "0" 00:16:26.579 } 00:16:26.579 ], 00:16:26.579 "allow_any_host": true, 00:16:26.579 "hosts": [], 00:16:26.579 "serial_number": "SPDK1", 00:16:26.579 "model_number": "SPDK bdev Controller", 00:16:26.579 "max_namespaces": 32, 00:16:26.579 "min_cntlid": 1, 00:16:26.579 "max_cntlid": 65519, 00:16:26.579 "namespaces": [ 00:16:26.579 { 00:16:26.579 "nsid": 1, 00:16:26.579 "bdev_name": "Malloc1", 00:16:26.579 "name": "Malloc1", 00:16:26.579 "nguid": "F858C66C246D48CAA9A5FA86131B80E8", 00:16:26.579 "uuid": "f858c66c-246d-48ca-a9a5-fa86131b80e8" 00:16:26.579 }, 00:16:26.579 { 00:16:26.579 "nsid": 2, 00:16:26.579 "bdev_name": "Malloc3", 00:16:26.579 "name": "Malloc3", 00:16:26.579 "nguid": "3882A8EE1BBA4BAE94D7B3F9E6812F30", 00:16:26.579 "uuid": "3882a8ee-1bba-4bae-94d7-b3f9e6812f30" 00:16:26.579 } 00:16:26.579 ] 00:16:26.579 }, 00:16:26.579 { 00:16:26.579 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:26.579 "subtype": "NVMe", 00:16:26.579 "listen_addresses": [ 00:16:26.579 { 00:16:26.579 "trtype": "VFIOUSER", 00:16:26.579 "adrfam": "IPv4", 00:16:26.579 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:26.579 "trsvcid": "0" 00:16:26.579 } 00:16:26.579 ], 00:16:26.579 "allow_any_host": true, 00:16:26.579 "hosts": [], 00:16:26.579 "serial_number": "SPDK2", 00:16:26.579 "model_number": "SPDK bdev Controller", 00:16:26.579 "max_namespaces": 32, 00:16:26.579 "min_cntlid": 1, 00:16:26.579 "max_cntlid": 65519, 00:16:26.579 "namespaces": [ 
00:16:26.579 { 00:16:26.579 "nsid": 1, 00:16:26.579 "bdev_name": "Malloc2", 00:16:26.579 "name": "Malloc2", 00:16:26.579 "nguid": "DBC85AA12058485790701A0714F371D8", 00:16:26.579 "uuid": "dbc85aa1-2058-4857-9070-1a0714f371d8" 00:16:26.579 }, 00:16:26.579 { 00:16:26.579 "nsid": 2, 00:16:26.579 "bdev_name": "Malloc4", 00:16:26.579 "name": "Malloc4", 00:16:26.579 "nguid": "979AABD5BAD04BD68DB6149ADACBF79F", 00:16:26.579 "uuid": "979aabd5-bad0-4bd6-8db6-149adacbf79f" 00:16:26.579 } 00:16:26.579 ] 00:16:26.579 } 00:16:26.579 ] 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3153146 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3146902 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3146902 ']' 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3146902 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:26.579 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146902 00:16:26.837 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.837 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.837 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146902' 00:16:26.837 killing process with pid 3146902 00:16:26.837 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 3146902 00:16:26.837 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3146902 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3153290 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3153290' 00:16:27.095 Process pid: 3153290 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3153290 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3153290 ']' 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.095 
13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.095 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:27.095 [2024-11-25 13:15:24.640641] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:27.095 [2024-11-25 13:15:24.641654] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:16:27.095 [2024-11-25 13:15:24.641719] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.095 [2024-11-25 13:15:24.710947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.354 [2024-11-25 13:15:24.770366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.354 [2024-11-25 13:15:24.770426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.354 [2024-11-25 13:15:24.770454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.354 [2024-11-25 13:15:24.770466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.354 [2024-11-25 13:15:24.770476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.354 [2024-11-25 13:15:24.775324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.354 [2024-11-25 13:15:24.775392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.354 [2024-11-25 13:15:24.775459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.354 [2024-11-25 13:15:24.775463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.354 [2024-11-25 13:15:24.862565] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:27.354 [2024-11-25 13:15:24.862759] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:27.354 [2024-11-25 13:15:24.863108] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:27.354 [2024-11-25 13:15:24.863774] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:27.354 [2024-11-25 13:15:24.864009] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:27.354 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.354 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:27.354 13:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:28.291 13:15:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:28.857 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:28.857 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:28.857 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:28.857 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:28.857 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:28.857 Malloc1 00:16:29.116 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:29.375 13:15:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:29.634 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:29.892 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:29.892 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:29.892 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:30.150 Malloc2 00:16:30.150 13:15:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:30.408 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:30.666 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:30.924 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:30.924 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3153290 00:16:30.924 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3153290 ']' 00:16:30.924 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3153290 00:16:30.924 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:30.924 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.924 13:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153290 00:16:31.182 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.182 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.182 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153290' 00:16:31.182 killing process with pid 3153290 00:16:31.182 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3153290 00:16:31.182 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3153290 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:31.441 00:16:31.441 real 0m53.597s 00:16:31.441 user 3m26.913s 00:16:31.441 sys 0m3.974s 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:31.441 ************************************ 00:16:31.441 END TEST nvmf_vfio_user 00:16:31.441 ************************************ 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:31.441 ************************************ 00:16:31.441 START TEST nvmf_vfio_user_nvme_compliance 00:16:31.441 ************************************ 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:31.441 * Looking for test storage... 00:16:31.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:31.441 13:15:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.441 13:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.441 13:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.441 --rc genhtml_branch_coverage=1 00:16:31.441 --rc genhtml_function_coverage=1 00:16:31.441 --rc genhtml_legend=1 00:16:31.441 --rc geninfo_all_blocks=1 00:16:31.441 --rc geninfo_unexecuted_blocks=1 00:16:31.441 00:16:31.441 ' 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.441 --rc genhtml_branch_coverage=1 00:16:31.441 --rc genhtml_function_coverage=1 00:16:31.441 --rc genhtml_legend=1 00:16:31.441 --rc geninfo_all_blocks=1 00:16:31.441 --rc geninfo_unexecuted_blocks=1 00:16:31.441 00:16:31.441 ' 00:16:31.441 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:31.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.441 --rc genhtml_branch_coverage=1 00:16:31.441 --rc genhtml_function_coverage=1 00:16:31.441 --rc 
genhtml_legend=1 00:16:31.442 --rc geninfo_all_blocks=1 00:16:31.442 --rc geninfo_unexecuted_blocks=1 00:16:31.442 00:16:31.442 ' 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:31.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.442 --rc genhtml_branch_coverage=1 00:16:31.442 --rc genhtml_function_coverage=1 00:16:31.442 --rc genhtml_legend=1 00:16:31.442 --rc geninfo_all_blocks=1 00:16:31.442 --rc geninfo_unexecuted_blocks=1 00:16:31.442 00:16:31.442 ' 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.442 13:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.442 13:15:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:31.442 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3153901 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3153901' 00:16:31.716 Process pid: 3153901 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3153901 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3153901 ']' 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.716 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:31.716 [2024-11-25 13:15:29.143999] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:16:31.716 [2024-11-25 13:15:29.144099] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.716 [2024-11-25 13:15:29.211571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.716 [2024-11-25 13:15:29.269977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.716 [2024-11-25 13:15:29.270032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.716 [2024-11-25 13:15:29.270045] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.716 [2024-11-25 13:15:29.270056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.716 [2024-11-25 13:15:29.270066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.716 [2024-11-25 13:15:29.271549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.716 [2024-11-25 13:15:29.271613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.716 [2024-11-25 13:15:29.271616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.974 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.974 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:31.974 13:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.908 13:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.908 malloc0 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:32.908 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:33.167 00:16:33.167 00:16:33.167 CUnit - A unit testing framework for C - Version 2.1-3 00:16:33.167 http://cunit.sourceforge.net/ 00:16:33.167 00:16:33.167 00:16:33.167 Suite: nvme_compliance 00:16:33.167 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-25 13:15:30.641974] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.167 [2024-11-25 13:15:30.643453] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:33.167 [2024-11-25 13:15:30.643480] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:33.167 [2024-11-25 13:15:30.643493] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:33.167 [2024-11-25 13:15:30.644992] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.167 passed 00:16:33.167 Test: admin_identify_ctrlr_verify_fused ...[2024-11-25 13:15:30.730657] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.167 [2024-11-25 13:15:30.733681] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.167 passed 00:16:33.167 Test: admin_identify_ns ...[2024-11-25 13:15:30.820808] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.425 [2024-11-25 13:15:30.881324] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:33.425 [2024-11-25 13:15:30.889337] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:33.425 [2024-11-25 13:15:30.910443] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:33.425 passed 00:16:33.425 Test: admin_get_features_mandatory_features ...[2024-11-25 13:15:30.995439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.425 [2024-11-25 13:15:30.998460] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.425 passed 00:16:33.425 Test: admin_get_features_optional_features ...[2024-11-25 13:15:31.080056] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.683 [2024-11-25 13:15:31.085094] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.683 passed 00:16:33.683 Test: admin_set_features_number_of_queues ...[2024-11-25 13:15:31.167793] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.683 [2024-11-25 13:15:31.272405] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.683 passed 00:16:33.940 Test: admin_get_log_page_mandatory_logs ...[2024-11-25 13:15:31.355905] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.940 [2024-11-25 13:15:31.358929] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.940 passed 00:16:33.940 Test: admin_get_log_page_with_lpo ...[2024-11-25 13:15:31.444064] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:33.940 [2024-11-25 13:15:31.512322] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:33.940 [2024-11-25 13:15:31.525394] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:33.940 passed 00:16:34.198 Test: fabric_property_get ...[2024-11-25 13:15:31.607952] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.198 [2024-11-25 13:15:31.609229] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:34.198 [2024-11-25 13:15:31.610978] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.198 passed 00:16:34.198 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-25 13:15:31.694501] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.198 [2024-11-25 13:15:31.695845] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:34.198 [2024-11-25 13:15:31.697522] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.198 passed 00:16:34.198 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-25 13:15:31.781878] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.456 [2024-11-25 13:15:31.869316] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.456 [2024-11-25 13:15:31.885324] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.456 [2024-11-25 13:15:31.890433] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.456 passed 00:16:34.456 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-25 13:15:31.974510] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.456 [2024-11-25 13:15:31.975843] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:34.456 [2024-11-25 13:15:31.977534] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.456 passed 00:16:34.456 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-25 13:15:32.060812] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.713 [2024-11-25 13:15:32.136317] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:34.713 [2024-11-25 
13:15:32.160311] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:34.713 [2024-11-25 13:15:32.165422] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.713 passed 00:16:34.713 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-25 13:15:32.249525] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.713 [2024-11-25 13:15:32.250860] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:34.713 [2024-11-25 13:15:32.250900] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:34.713 [2024-11-25 13:15:32.252551] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.713 passed 00:16:34.713 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-25 13:15:32.336793] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.972 [2024-11-25 13:15:32.428311] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:34.972 [2024-11-25 13:15:32.436340] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:34.972 [2024-11-25 13:15:32.444314] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:34.972 [2024-11-25 13:15:32.452314] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:34.972 [2024-11-25 13:15:32.481430] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.972 passed 00:16:34.972 Test: admin_create_io_sq_verify_pc ...[2024-11-25 13:15:32.565136] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:34.972 [2024-11-25 13:15:32.581324] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:34.972 [2024-11-25 13:15:32.598582] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:34.972 passed 00:16:35.230 Test: admin_create_io_qp_max_qps ...[2024-11-25 13:15:32.681146] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.165 [2024-11-25 13:15:33.775334] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:36.729 [2024-11-25 13:15:34.172063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.729 passed 00:16:36.729 Test: admin_create_io_sq_shared_cq ...[2024-11-25 13:15:34.254824] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:36.729 [2024-11-25 13:15:34.386312] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:36.986 [2024-11-25 13:15:34.422423] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:36.986 passed 00:16:36.986 00:16:36.986 Run Summary: Type Total Ran Passed Failed Inactive 00:16:36.986 suites 1 1 n/a 0 0 00:16:36.986 tests 18 18 18 0 0 00:16:36.986 asserts 360 360 360 0 n/a 00:16:36.986 00:16:36.986 Elapsed time = 1.566 seconds 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3153901 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3153901 ']' 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3153901 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153901 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153901' 00:16:36.986 killing process with pid 3153901 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3153901 00:16:36.986 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3153901 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:37.244 00:16:37.244 real 0m5.810s 00:16:37.244 user 0m16.369s 00:16:37.244 sys 0m0.513s 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:37.244 ************************************ 00:16:37.244 END TEST nvmf_vfio_user_nvme_compliance 00:16:37.244 ************************************ 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.244 ************************************ 00:16:37.244 START TEST nvmf_vfio_user_fuzz 00:16:37.244 ************************************ 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:37.244 * Looking for test storage... 00:16:37.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:37.244 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.504 13:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:37.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.504 --rc genhtml_branch_coverage=1 00:16:37.504 --rc genhtml_function_coverage=1 00:16:37.504 --rc genhtml_legend=1 00:16:37.504 --rc geninfo_all_blocks=1 00:16:37.504 --rc geninfo_unexecuted_blocks=1 00:16:37.504 00:16:37.504 ' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:37.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.504 --rc genhtml_branch_coverage=1 00:16:37.504 --rc genhtml_function_coverage=1 00:16:37.504 --rc genhtml_legend=1 00:16:37.504 --rc geninfo_all_blocks=1 00:16:37.504 --rc geninfo_unexecuted_blocks=1 00:16:37.504 00:16:37.504 ' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:37.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.504 --rc genhtml_branch_coverage=1 00:16:37.504 --rc genhtml_function_coverage=1 00:16:37.504 --rc genhtml_legend=1 00:16:37.504 --rc geninfo_all_blocks=1 00:16:37.504 --rc geninfo_unexecuted_blocks=1 00:16:37.504 00:16:37.504 ' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:37.504 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:37.504 --rc genhtml_branch_coverage=1 00:16:37.504 --rc genhtml_function_coverage=1 00:16:37.504 --rc genhtml_legend=1 00:16:37.504 --rc geninfo_all_blocks=1 00:16:37.504 --rc geninfo_unexecuted_blocks=1 00:16:37.504 00:16:37.504 ' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.504 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.505 13:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3154627 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3154627' 00:16:37.505 Process pid: 3154627 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3154627 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3154627 ']' 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.505 13:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.505 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:37.763 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.763 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:37.763 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.697 malloc0 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.697 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:38.698 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.698 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:38.698 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:10.761 Fuzzing completed. Shutting down the fuzz application 00:17:10.761 00:17:10.761 Dumping successful admin opcodes: 00:17:10.761 8, 9, 10, 24, 00:17:10.761 Dumping successful io opcodes: 00:17:10.761 0, 00:17:10.761 NS: 0x20000081ef00 I/O qp, Total commands completed: 673818, total successful commands: 2621, random_seed: 3433317568 00:17:10.761 NS: 0x20000081ef00 admin qp, Total commands completed: 128121, total successful commands: 1043, random_seed: 2527639360 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3154627 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3154627 ']' 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3154627 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154627 00:17:10.761 13:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154627' 00:17:10.761 killing process with pid 3154627 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3154627 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3154627 00:17:10.761 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:10.761 00:17:10.761 real 0m32.243s 00:17:10.761 user 0m30.534s 00:17:10.761 sys 0m30.296s 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:10.761 ************************************ 00:17:10.761 END TEST nvmf_vfio_user_fuzz 00:17:10.761 ************************************ 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.761 ************************************ 00:17:10.761 START TEST nvmf_auth_target 00:17:10.761 ************************************ 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:10.761 * Looking for test storage... 00:17:10.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.761 13:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.761 13:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.761 --rc genhtml_branch_coverage=1 00:17:10.761 --rc genhtml_function_coverage=1 00:17:10.761 --rc genhtml_legend=1 00:17:10.761 --rc geninfo_all_blocks=1 00:17:10.761 --rc geninfo_unexecuted_blocks=1 00:17:10.761 00:17:10.761 ' 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.761 --rc genhtml_branch_coverage=1 00:17:10.761 --rc genhtml_function_coverage=1 00:17:10.761 --rc genhtml_legend=1 00:17:10.761 --rc geninfo_all_blocks=1 00:17:10.761 --rc geninfo_unexecuted_blocks=1 00:17:10.761 00:17:10.761 ' 00:17:10.761 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.762 --rc genhtml_branch_coverage=1 00:17:10.762 --rc genhtml_function_coverage=1 00:17:10.762 --rc genhtml_legend=1 00:17:10.762 --rc geninfo_all_blocks=1 00:17:10.762 --rc geninfo_unexecuted_blocks=1 00:17:10.762 00:17:10.762 ' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.762 --rc genhtml_branch_coverage=1 00:17:10.762 --rc genhtml_function_coverage=1 00:17:10.762 --rc genhtml_legend=1 00:17:10.762 
--rc geninfo_all_blocks=1 00:17:10.762 --rc geninfo_unexecuted_blocks=1 00:17:10.762 00:17:10.762 ' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.762 
13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:10.762 13:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.762 13:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.762 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.696 13:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:11.696 13:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:11.696 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:11.696 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.696 
13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.696 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:11.954 Found net devices under 0000:09:00.0: cvl_0_0 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.954 
13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:11.954 Found net devices under 0000:09:00.1: cvl_0_1 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.954 13:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:17:11.954 00:17:11.954 --- 10.0.0.2 ping statistics --- 00:17:11.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.954 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:17:11.954 00:17:11.954 --- 10.0.0.1 ping statistics --- 00:17:11.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.954 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:11.954 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3160091 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3160091 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3160091 ']' 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
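The `ipts` call at nvmf/common.sh@287 above expands into the `iptables` invocation logged immediately after it. Judging from that logged command, it is a thin wrapper that tags every rule it installs with an `SPDK_NVMF:` comment recording the original arguments, so teardown can later match and delete exactly the rules the test added. A minimal sketch of such a wrapper (a hypothetical reconstruction, not SPDK's actual source; `echo` stands in for executing `iptables` so the sketch runs without root):

```shell
# Hypothetical reconstruction of the ipts helper seen in the log: forward the
# arguments to iptables and append a comment that records those arguments, so
# cleanup can find the rules by their SPDK_NVMF: prefix. echo replaces the
# real iptables call here so the sketch is runnable without root privileges.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Same rule the log shows being installed for the initiator interface:
rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

The comment-based tagging matters because the test shares the host firewall with everything else on the CI node; deleting by comment match avoids flushing unrelated rules.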
00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.955 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.211 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.211 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:12.211 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.211 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.211 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3160228 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=59570f91dffef4f44ef15aa55deb3ab690d224ba6cd01242 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PNz 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 59570f91dffef4f44ef15aa55deb3ab690d224ba6cd01242 0 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 59570f91dffef4f44ef15aa55deb3ab690d224ba6cd01242 0 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=59570f91dffef4f44ef15aa55deb3ab690d224ba6cd01242 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:12.212 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PNz 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PNz 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.PNz 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=788c4a01df6f6e2a2a0d06cf8b338dbc7b57b3cc9973cc3298238efe5837a855 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gbZ 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 788c4a01df6f6e2a2a0d06cf8b338dbc7b57b3cc9973cc3298238efe5837a855 3 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 788c4a01df6f6e2a2a0d06cf8b338dbc7b57b3cc9973cc3298238efe5837a855 3 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=788c4a01df6f6e2a2a0d06cf8b338dbc7b57b3cc9973cc3298238efe5837a855 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gbZ 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gbZ 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.gbZ 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b7d93b38004c6bda2fa53806928670ec 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.R7p 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b7d93b38004c6bda2fa53806928670ec 1 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
b7d93b38004c6bda2fa53806928670ec 1 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b7d93b38004c6bda2fa53806928670ec 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.R7p 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.R7p 00:17:12.469 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.R7p 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=891da5b146f434c10a2ce67783e22a8086cb474a89b91ca6 00:17:12.469 13:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jFj 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 891da5b146f434c10a2ce67783e22a8086cb474a89b91ca6 2 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 891da5b146f434c10a2ce67783e22a8086cb474a89b91ca6 2 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=891da5b146f434c10a2ce67783e22a8086cb474a89b91ca6 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jFj 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jFj 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.jFj 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3453a9df627834fa7c3c08f07c27258289bb72839b4d4c89 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PzI 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3453a9df627834fa7c3c08f07c27258289bb72839b4d4c89 2 00:17:12.469 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3453a9df627834fa7c3c08f07c27258289bb72839b4d4c89 2 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3453a9df627834fa7c3c08f07c27258289bb72839b4d4c89 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PzI 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PzI 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.PzI 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7dc4ada71547ff08fea0b0283657aea2 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.C9t 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7dc4ada71547ff08fea0b0283657aea2 1 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7dc4ada71547ff08fea0b0283657aea2 1 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7dc4ada71547ff08fea0b0283657aea2 00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:12.470 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.C9t 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.C9t 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.C9t 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ec021b5e346f0409647209448bbfe46227e65a62aeff47e2faa070bc29de7944 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DD4 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ec021b5e346f0409647209448bbfe46227e65a62aeff47e2faa070bc29de7944 3 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 ec021b5e346f0409647209448bbfe46227e65a62aeff47e2faa070bc29de7944 3 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ec021b5e346f0409647209448bbfe46227e65a62aeff47e2faa070bc29de7944 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DD4 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DD4 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.DD4 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3160091 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3160091 ']' 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
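Each `gen_dhchap_key` call above draws random bytes from /dev/urandom with `xxd`, then pipes them through an inline `python -` step (nvmf/common.sh@733) to produce the DHCHAP secret that gets written to the `/tmp/spdk.key-*` file. Per the NVMe-oF DHCHAP secret representation, the result is a `DHHC-1:<digest>:<base64>:` string whose base64 payload is the raw key followed by its CRC32 in little-endian order. A hedged sketch of that formatting step (assuming the standard DHHC-1 layout; the hex key below is the one logged for keys[0]):

```shell
# Sketch of the key-formatting step: hex key in, DHHC-1 secret out.
# Assumed layout (NVMe-oF DHCHAP secret representation): base64 of the raw
# key bytes followed by their CRC32, little-endian, wrapped as
# DHHC-1:<digest id>:<base64>:
key=59570f91dffef4f44ef15aa55deb3ab690d224ba6cd01242   # 48 hex chars = 24 bytes
digest=0                                               # 0=null 1=sha256 2=sha384 3=sha512
dhchap_key=$(python3 - "$key" "$digest" <<'EOF'
import base64, binascii, sys

raw = bytes.fromhex(sys.argv[1])
hmac_id = int(sys.argv[2])
# Append the little-endian CRC32 of the key, then base64 the whole payload.
crc = binascii.crc32(raw).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(hmac_id, base64.b64encode(raw + crc).decode()))
EOF
)
echo "$dhchap_key"
```

This explains the lengths in the trace: a 48-character secret needs 24 random bytes (`xxd -l 24`), and a 64-character secret needs 32 (`xxd -l 32`), since the visible length is fixed by the base64 encoding of key-plus-CRC.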
00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.727 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3160228 /var/tmp/host.sock 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3160228 ']' 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:12.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.984 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PNz 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.PNz 00:17:13.241 13:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.PNz 00:17:13.498 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.gbZ ]] 00:17:13.498 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gbZ 00:17:13.498 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.498 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.498 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.498 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gbZ 00:17:13.498 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gbZ 00:17:13.755 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:13.755 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.R7p 00:17:13.755 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.755 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.755 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.R7p 00:17:13.755 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.R7p 00:17:14.013 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.jFj ]] 00:17:14.013 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jFj 00:17:14.013 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.013 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.013 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jFj 00:17:14.013 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jFj 00:17:14.271 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:14.271 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.PzI 00:17:14.271 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.271 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.271 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.271 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.PzI 00:17:14.271 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.PzI 00:17:14.529 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.C9t ]] 00:17:14.529 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C9t 00:17:14.529 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.529 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.787 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.787 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C9t 00:17:14.787 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C9t 00:17:15.044 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:15.044 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DD4 00:17:15.044 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.045 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.045 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.045 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.DD4 00:17:15.045 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.DD4 00:17:15.302 13:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:15.302 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:15.302 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.302 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.302 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.302 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.589 13:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.589 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.868 00:17:15.868 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.868 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.868 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.126 { 00:17:16.126 "cntlid": 1, 00:17:16.126 "qid": 0, 00:17:16.126 "state": "enabled", 00:17:16.126 "thread": "nvmf_tgt_poll_group_000", 00:17:16.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:16.126 "listen_address": { 00:17:16.126 "trtype": "TCP", 00:17:16.126 "adrfam": "IPv4", 00:17:16.126 "traddr": "10.0.0.2", 00:17:16.126 "trsvcid": "4420" 00:17:16.126 }, 00:17:16.126 "peer_address": { 00:17:16.126 "trtype": "TCP", 00:17:16.126 "adrfam": "IPv4", 00:17:16.126 "traddr": "10.0.0.1", 00:17:16.126 "trsvcid": "58518" 00:17:16.126 }, 00:17:16.126 "auth": { 00:17:16.126 "state": "completed", 00:17:16.126 "digest": "sha256", 00:17:16.126 "dhgroup": "null" 00:17:16.126 } 00:17:16.126 } 00:17:16.126 ]' 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.126 13:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.384 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:16.384 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:17.316 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.573 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.831 00:17:18.089 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.089 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.089 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.347 { 00:17:18.347 "cntlid": 3, 00:17:18.347 "qid": 0, 00:17:18.347 "state": "enabled", 00:17:18.347 "thread": "nvmf_tgt_poll_group_000", 00:17:18.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:18.347 "listen_address": { 00:17:18.347 "trtype": "TCP", 00:17:18.347 "adrfam": "IPv4", 00:17:18.347 
"traddr": "10.0.0.2", 00:17:18.347 "trsvcid": "4420" 00:17:18.347 }, 00:17:18.347 "peer_address": { 00:17:18.347 "trtype": "TCP", 00:17:18.347 "adrfam": "IPv4", 00:17:18.347 "traddr": "10.0.0.1", 00:17:18.347 "trsvcid": "58552" 00:17:18.347 }, 00:17:18.347 "auth": { 00:17:18.347 "state": "completed", 00:17:18.347 "digest": "sha256", 00:17:18.347 "dhgroup": "null" 00:17:18.347 } 00:17:18.347 } 00:17:18.347 ]' 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.347 13:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.604 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:18.604 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:19.536 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.536 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.536 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.536 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.536 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.536 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.536 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.536 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.793 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.794 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.051 00:17:20.051 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.051 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.051 
13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.309 { 00:17:20.309 "cntlid": 5, 00:17:20.309 "qid": 0, 00:17:20.309 "state": "enabled", 00:17:20.309 "thread": "nvmf_tgt_poll_group_000", 00:17:20.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:20.309 "listen_address": { 00:17:20.309 "trtype": "TCP", 00:17:20.309 "adrfam": "IPv4", 00:17:20.309 "traddr": "10.0.0.2", 00:17:20.309 "trsvcid": "4420" 00:17:20.309 }, 00:17:20.309 "peer_address": { 00:17:20.309 "trtype": "TCP", 00:17:20.309 "adrfam": "IPv4", 00:17:20.309 "traddr": "10.0.0.1", 00:17:20.309 "trsvcid": "60738" 00:17:20.309 }, 00:17:20.309 "auth": { 00:17:20.309 "state": "completed", 00:17:20.309 "digest": "sha256", 00:17:20.309 "dhgroup": "null" 00:17:20.309 } 00:17:20.309 } 00:17:20.309 ]' 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.309 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:20.567 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:20.567 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.567 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.567 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.567 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.825 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:20.825 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.761 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
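Going the other way, the DHHC-1 secrets handed to `nvme connect` throughout this log can be unpacked back into the original hex key. A sketch under the same assumption as the key-formatting step above (ASCII key plus little-endian zlib CRC-32 inside the base64 payload); `parse_dhchap_key` is an illustrative name, not an SPDK or nvme-cli API:

```python
import base64
import zlib


def parse_dhchap_key(secret: str) -> tuple[int, str]:
    """Split a DHHC-1:<digest>:<base64>: secret into (digest, hex key).

    Verifies the little-endian zlib CRC-32 trailer (assumed layout,
    inferred from the secrets in this log rather than from a spec).
    """
    prefix, digest_hex, b64, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError(f"unexpected prefix {prefix!r}")
    raw = base64.b64decode(b64)
    payload, crc = raw[:-4], raw[-4:]
    if zlib.crc32(payload) != int.from_bytes(crc, byteorder="little"):
        raise ValueError("CRC-32 mismatch")
    return int(digest_hex, 16), payload.decode("ascii")


# The key0 secret used by the first nvme connect earlier in this log:
digest, key = parse_dhchap_key(
    "DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==:"
)
```

The recovered `key` is the raw 48-character hex string and `digest` the hash index (0 here), matching the `format_key` inputs recorded when the keys were generated.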
00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.018 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.274 00:17:22.274 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.274 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.274 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.532 
13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.532 { 00:17:22.532 "cntlid": 7, 00:17:22.532 "qid": 0, 00:17:22.532 "state": "enabled", 00:17:22.532 "thread": "nvmf_tgt_poll_group_000", 00:17:22.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:22.532 "listen_address": { 00:17:22.532 "trtype": "TCP", 00:17:22.532 "adrfam": "IPv4", 00:17:22.532 "traddr": "10.0.0.2", 00:17:22.532 "trsvcid": "4420" 00:17:22.532 }, 00:17:22.532 "peer_address": { 00:17:22.532 "trtype": "TCP", 00:17:22.532 "adrfam": "IPv4", 00:17:22.532 "traddr": "10.0.0.1", 00:17:22.532 "trsvcid": "60778" 00:17:22.532 }, 00:17:22.532 "auth": { 00:17:22.532 "state": "completed", 00:17:22.532 "digest": "sha256", 00:17:22.532 "dhgroup": "null" 00:17:22.532 } 00:17:22.532 } 00:17:22.532 ]' 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:22.532 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.789 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.789 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.789 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.047 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:23.047 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.979 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.237 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.495 00:17:24.495 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.495 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.495 13:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.753 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.753 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.753 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.753 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.753 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.753 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.753 { 00:17:24.753 "cntlid": 9, 00:17:24.753 "qid": 0, 00:17:24.753 "state": "enabled", 00:17:24.753 "thread": "nvmf_tgt_poll_group_000", 00:17:24.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:24.754 "listen_address": { 00:17:24.754 "trtype": "TCP", 00:17:24.754 "adrfam": "IPv4", 00:17:24.754 "traddr": "10.0.0.2", 00:17:24.754 "trsvcid": "4420" 00:17:24.754 }, 00:17:24.754 "peer_address": { 00:17:24.754 "trtype": "TCP", 00:17:24.754 "adrfam": "IPv4", 00:17:24.754 "traddr": "10.0.0.1", 00:17:24.754 "trsvcid": "60812" 00:17:24.754 
}, 00:17:24.754 "auth": { 00:17:24.754 "state": "completed", 00:17:24.754 "digest": "sha256", 00:17:24.754 "dhgroup": "ffdhe2048" 00:17:24.754 } 00:17:24.754 } 00:17:24.754 ]' 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.754 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.011 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:25.011 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret 
DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.945 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.203 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.461 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.461 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.461 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.461 13:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.719 00:17:26.719 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.719 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.719 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.977 { 00:17:26.977 "cntlid": 11, 00:17:26.977 "qid": 0, 00:17:26.977 "state": "enabled", 00:17:26.977 "thread": "nvmf_tgt_poll_group_000", 00:17:26.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:26.977 "listen_address": { 00:17:26.977 "trtype": "TCP", 00:17:26.977 "adrfam": "IPv4", 00:17:26.977 "traddr": "10.0.0.2", 00:17:26.977 "trsvcid": "4420" 00:17:26.977 }, 00:17:26.977 "peer_address": { 00:17:26.977 "trtype": "TCP", 00:17:26.977 "adrfam": "IPv4", 00:17:26.977 "traddr": "10.0.0.1", 00:17:26.977 "trsvcid": "60836" 00:17:26.977 }, 00:17:26.977 "auth": { 00:17:26.977 "state": "completed", 00:17:26.977 "digest": "sha256", 00:17:26.977 "dhgroup": "ffdhe2048" 00:17:26.977 } 00:17:26.977 } 00:17:26.977 ]' 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.977 13:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.977 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.235 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:27.235 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.170 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.435 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.001 00:17:29.001 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.001 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.001 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.001 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.001 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.001 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.001 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.259 13:16:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.259 { 00:17:29.259 "cntlid": 13, 00:17:29.259 "qid": 0, 00:17:29.259 "state": "enabled", 00:17:29.259 "thread": "nvmf_tgt_poll_group_000", 00:17:29.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:29.259 "listen_address": { 00:17:29.259 "trtype": "TCP", 00:17:29.259 "adrfam": "IPv4", 00:17:29.259 "traddr": "10.0.0.2", 00:17:29.259 "trsvcid": "4420" 00:17:29.259 }, 00:17:29.259 "peer_address": { 00:17:29.259 "trtype": "TCP", 00:17:29.259 "adrfam": "IPv4", 00:17:29.259 "traddr": "10.0.0.1", 00:17:29.259 "trsvcid": "47548" 00:17:29.259 }, 00:17:29.259 "auth": { 00:17:29.259 "state": "completed", 00:17:29.259 "digest": "sha256", 00:17:29.259 "dhgroup": "ffdhe2048" 00:17:29.259 } 00:17:29.259 } 00:17:29.259 ]' 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.259 13:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.518 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:29.518 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:30.452 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.711 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.969 00:17:30.969 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.969 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.969 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.227 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.227 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.227 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.227 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.227 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.227 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.227 { 00:17:31.227 "cntlid": 15, 00:17:31.227 "qid": 0, 00:17:31.227 "state": "enabled", 00:17:31.227 "thread": "nvmf_tgt_poll_group_000", 00:17:31.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:31.227 "listen_address": { 00:17:31.227 "trtype": "TCP", 00:17:31.227 "adrfam": "IPv4", 00:17:31.227 "traddr": "10.0.0.2", 00:17:31.227 "trsvcid": "4420" 00:17:31.227 }, 00:17:31.227 "peer_address": { 00:17:31.227 "trtype": "TCP", 00:17:31.227 "adrfam": "IPv4", 00:17:31.227 "traddr": "10.0.0.1", 
00:17:31.227 "trsvcid": "47580" 00:17:31.227 }, 00:17:31.227 "auth": { 00:17:31.227 "state": "completed", 00:17:31.227 "digest": "sha256", 00:17:31.227 "dhgroup": "ffdhe2048" 00:17:31.227 } 00:17:31.227 } 00:17:31.227 ]' 00:17:31.227 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.486 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.486 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.486 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.486 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.486 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.486 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.486 13:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.743 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:31.743 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.676 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.935 13:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.935 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.193 00:17:33.451 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.451 13:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.451 13:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.709 { 00:17:33.709 "cntlid": 17, 00:17:33.709 "qid": 0, 00:17:33.709 "state": "enabled", 00:17:33.709 "thread": "nvmf_tgt_poll_group_000", 00:17:33.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:33.709 "listen_address": { 00:17:33.709 "trtype": "TCP", 00:17:33.709 "adrfam": "IPv4", 00:17:33.709 "traddr": "10.0.0.2", 00:17:33.709 "trsvcid": "4420" 00:17:33.709 }, 00:17:33.709 "peer_address": { 00:17:33.709 "trtype": "TCP", 00:17:33.709 "adrfam": "IPv4", 00:17:33.709 "traddr": "10.0.0.1", 00:17:33.709 "trsvcid": "47606" 00:17:33.709 }, 00:17:33.709 "auth": { 00:17:33.709 "state": "completed", 00:17:33.709 "digest": "sha256", 00:17:33.709 "dhgroup": "ffdhe3072" 00:17:33.709 } 00:17:33.709 } 00:17:33.709 ]' 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.709 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.968 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:33.968 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.903 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.162 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.728 00:17:35.728 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.728 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.728 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.986 
13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.986 { 00:17:35.986 "cntlid": 19, 00:17:35.986 "qid": 0, 00:17:35.986 "state": "enabled", 00:17:35.986 "thread": "nvmf_tgt_poll_group_000", 00:17:35.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:35.986 "listen_address": { 00:17:35.986 "trtype": "TCP", 00:17:35.986 "adrfam": "IPv4", 00:17:35.986 "traddr": "10.0.0.2", 00:17:35.986 "trsvcid": "4420" 00:17:35.986 }, 00:17:35.986 "peer_address": { 00:17:35.986 "trtype": "TCP", 00:17:35.986 "adrfam": "IPv4", 00:17:35.986 "traddr": "10.0.0.1", 00:17:35.986 "trsvcid": "47642" 00:17:35.986 }, 00:17:35.986 "auth": { 00:17:35.986 "state": "completed", 00:17:35.986 "digest": "sha256", 00:17:35.986 "dhgroup": "ffdhe3072" 00:17:35.986 } 00:17:35.986 } 00:17:35.986 ]' 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.986 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.244 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:36.245 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.180 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.438 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.438 13:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.697 00:17:37.697 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.697 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.697 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.262 { 00:17:38.262 "cntlid": 21, 00:17:38.262 "qid": 0, 00:17:38.262 "state": "enabled", 00:17:38.262 "thread": "nvmf_tgt_poll_group_000", 00:17:38.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:38.262 "listen_address": { 00:17:38.262 "trtype": "TCP", 00:17:38.262 "adrfam": "IPv4", 00:17:38.262 "traddr": "10.0.0.2", 00:17:38.262 "trsvcid": "4420" 00:17:38.262 }, 00:17:38.262 "peer_address": { 
00:17:38.262 "trtype": "TCP", 00:17:38.262 "adrfam": "IPv4", 00:17:38.262 "traddr": "10.0.0.1", 00:17:38.262 "trsvcid": "47662" 00:17:38.262 }, 00:17:38.262 "auth": { 00:17:38.262 "state": "completed", 00:17:38.262 "digest": "sha256", 00:17:38.262 "dhgroup": "ffdhe3072" 00:17:38.262 } 00:17:38.262 } 00:17:38.262 ]' 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.262 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.520 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:38.520 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.453 13:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:39.712 13:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.712 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.970 00:17:39.970 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.970 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.970 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.259 { 00:17:40.259 "cntlid": 23, 00:17:40.259 "qid": 0, 00:17:40.259 "state": "enabled", 00:17:40.259 "thread": "nvmf_tgt_poll_group_000", 00:17:40.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:40.259 "listen_address": { 00:17:40.259 "trtype": "TCP", 00:17:40.259 "adrfam": "IPv4", 00:17:40.259 "traddr": "10.0.0.2", 00:17:40.259 "trsvcid": "4420" 00:17:40.259 }, 00:17:40.259 "peer_address": { 00:17:40.259 "trtype": "TCP", 00:17:40.259 "adrfam": "IPv4", 00:17:40.259 "traddr": "10.0.0.1", 00:17:40.259 "trsvcid": "56992" 00:17:40.259 }, 00:17:40.259 "auth": { 00:17:40.259 "state": "completed", 00:17:40.259 "digest": "sha256", 00:17:40.259 "dhgroup": "ffdhe3072" 00:17:40.259 } 00:17:40.259 } 00:17:40.259 ]' 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.259 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.541 13:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:40.541 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.541 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.541 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.541 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.799 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:40.799 13:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.731 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.732 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.989 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.247 00:17:42.247 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.247 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.247 13:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.505 13:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.505 { 00:17:42.505 "cntlid": 25, 00:17:42.505 "qid": 0, 00:17:42.505 "state": "enabled", 00:17:42.505 "thread": "nvmf_tgt_poll_group_000", 00:17:42.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:42.505 "listen_address": { 00:17:42.505 "trtype": "TCP", 00:17:42.505 "adrfam": "IPv4", 00:17:42.505 "traddr": "10.0.0.2", 00:17:42.505 "trsvcid": "4420" 00:17:42.505 }, 00:17:42.505 "peer_address": { 00:17:42.505 "trtype": "TCP", 00:17:42.505 "adrfam": "IPv4", 00:17:42.505 "traddr": "10.0.0.1", 00:17:42.505 "trsvcid": "57016" 00:17:42.505 }, 00:17:42.505 "auth": { 00:17:42.505 "state": "completed", 00:17:42.505 "digest": "sha256", 00:17:42.505 "dhgroup": "ffdhe4096" 00:17:42.505 } 00:17:42.505 } 00:17:42.505 ]' 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.505 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.763 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.763 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.763 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.764 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.764 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.021 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:43.021 13:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:43.954 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.954 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:43.954 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.954 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.954 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.954 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.954 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.954 13:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.212 13:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.470 00:17:44.470 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.470 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.470 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.728 { 00:17:44.728 "cntlid": 27, 00:17:44.728 "qid": 0, 00:17:44.728 "state": "enabled", 00:17:44.728 "thread": "nvmf_tgt_poll_group_000", 00:17:44.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:44.728 "listen_address": { 00:17:44.728 "trtype": "TCP", 00:17:44.728 "adrfam": "IPv4", 00:17:44.728 "traddr": "10.0.0.2", 00:17:44.728 
"trsvcid": "4420" 00:17:44.728 }, 00:17:44.728 "peer_address": { 00:17:44.728 "trtype": "TCP", 00:17:44.728 "adrfam": "IPv4", 00:17:44.728 "traddr": "10.0.0.1", 00:17:44.728 "trsvcid": "57044" 00:17:44.728 }, 00:17:44.728 "auth": { 00:17:44.728 "state": "completed", 00:17:44.728 "digest": "sha256", 00:17:44.728 "dhgroup": "ffdhe4096" 00:17:44.728 } 00:17:44.728 } 00:17:44.728 ]' 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.728 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.986 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.986 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.986 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.986 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.986 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.244 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:45.244 13:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:46.194 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.194 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:46.194 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.194 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.194 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.194 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.195 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:46.195 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.453 13:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.710 00:17:46.710 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.710 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:46.710 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.967 { 00:17:46.967 "cntlid": 29, 00:17:46.967 "qid": 0, 00:17:46.967 "state": "enabled", 00:17:46.967 "thread": "nvmf_tgt_poll_group_000", 00:17:46.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:46.967 "listen_address": { 00:17:46.967 "trtype": "TCP", 00:17:46.967 "adrfam": "IPv4", 00:17:46.967 "traddr": "10.0.0.2", 00:17:46.967 "trsvcid": "4420" 00:17:46.967 }, 00:17:46.967 "peer_address": { 00:17:46.967 "trtype": "TCP", 00:17:46.967 "adrfam": "IPv4", 00:17:46.967 "traddr": "10.0.0.1", 00:17:46.967 "trsvcid": "57074" 00:17:46.967 }, 00:17:46.967 "auth": { 00:17:46.967 "state": "completed", 00:17:46.967 "digest": "sha256", 00:17:46.967 "dhgroup": "ffdhe4096" 00:17:46.967 } 00:17:46.967 } 00:17:46.967 ]' 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.967 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.967 13:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.234 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.234 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.234 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.234 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.234 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.499 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:47.499 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:48.432 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.432 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.432 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.433 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.433 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.433 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.433 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.433 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.690 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.948 00:17:48.948 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.948 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.948 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.206 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.464 { 00:17:49.464 "cntlid": 31, 00:17:49.464 "qid": 0, 00:17:49.464 "state": "enabled", 00:17:49.464 "thread": "nvmf_tgt_poll_group_000", 00:17:49.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:49.464 "listen_address": { 00:17:49.464 "trtype": "TCP", 00:17:49.464 "adrfam": "IPv4", 00:17:49.464 "traddr": "10.0.0.2", 00:17:49.464 "trsvcid": "4420" 00:17:49.464 }, 00:17:49.464 "peer_address": { 00:17:49.464 "trtype": "TCP", 00:17:49.464 "adrfam": "IPv4", 00:17:49.464 "traddr": "10.0.0.1", 00:17:49.464 "trsvcid": "55242" 00:17:49.464 }, 00:17:49.464 "auth": { 00:17:49.464 "state": "completed", 00:17:49.464 "digest": "sha256", 00:17:49.464 "dhgroup": "ffdhe4096" 00:17:49.464 } 00:17:49.464 } 00:17:49.464 ]' 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.464 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.723 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:49.723 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.655 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.655 13:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.913 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.914 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.914 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.478 00:17:51.478 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.478 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.478 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.734 { 00:17:51.734 "cntlid": 33, 00:17:51.734 "qid": 0, 00:17:51.734 "state": "enabled", 00:17:51.734 "thread": "nvmf_tgt_poll_group_000", 00:17:51.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:51.734 "listen_address": { 00:17:51.734 "trtype": "TCP", 00:17:51.734 "adrfam": "IPv4", 00:17:51.734 "traddr": "10.0.0.2", 00:17:51.734 
"trsvcid": "4420" 00:17:51.734 }, 00:17:51.734 "peer_address": { 00:17:51.734 "trtype": "TCP", 00:17:51.734 "adrfam": "IPv4", 00:17:51.734 "traddr": "10.0.0.1", 00:17:51.734 "trsvcid": "55258" 00:17:51.734 }, 00:17:51.734 "auth": { 00:17:51.734 "state": "completed", 00:17:51.734 "digest": "sha256", 00:17:51.734 "dhgroup": "ffdhe6144" 00:17:51.734 } 00:17:51.734 } 00:17:51.734 ]' 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.734 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.298 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:52.298 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:17:52.860 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.123 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.123 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.123 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.123 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.123 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.124 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.124 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.380 13:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.380 13:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.943 00:17:53.943 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.943 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.943 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.200 { 00:17:54.200 "cntlid": 35, 00:17:54.200 "qid": 0, 00:17:54.200 "state": "enabled", 00:17:54.200 "thread": "nvmf_tgt_poll_group_000", 00:17:54.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:54.200 "listen_address": { 00:17:54.200 "trtype": "TCP", 00:17:54.200 "adrfam": "IPv4", 00:17:54.200 "traddr": "10.0.0.2", 00:17:54.200 "trsvcid": "4420" 00:17:54.200 }, 00:17:54.200 "peer_address": { 00:17:54.200 "trtype": "TCP", 00:17:54.200 "adrfam": "IPv4", 00:17:54.200 "traddr": "10.0.0.1", 00:17:54.200 "trsvcid": "55286" 00:17:54.200 }, 00:17:54.200 "auth": { 00:17:54.200 "state": "completed", 00:17:54.200 "digest": "sha256", 00:17:54.200 "dhgroup": "ffdhe6144" 00:17:54.200 } 00:17:54.200 } 00:17:54.200 ]' 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.200 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.200 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.456 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:54.456 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:55.387 13:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.644 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.210 00:17:56.210 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.210 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.210 13:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.468 13:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.468 { 00:17:56.468 "cntlid": 37, 00:17:56.468 "qid": 0, 00:17:56.468 "state": "enabled", 00:17:56.468 "thread": "nvmf_tgt_poll_group_000", 00:17:56.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:56.468 "listen_address": { 00:17:56.468 "trtype": "TCP", 00:17:56.468 "adrfam": "IPv4", 00:17:56.468 "traddr": "10.0.0.2", 00:17:56.468 "trsvcid": "4420" 00:17:56.468 }, 00:17:56.468 "peer_address": { 00:17:56.468 "trtype": "TCP", 00:17:56.468 "adrfam": "IPv4", 00:17:56.468 "traddr": "10.0.0.1", 00:17:56.468 "trsvcid": "55306" 00:17:56.468 }, 00:17:56.468 "auth": { 00:17:56.468 "state": "completed", 00:17:56.468 "digest": "sha256", 00:17:56.468 "dhgroup": "ffdhe6144" 00:17:56.468 } 00:17:56.468 } 00:17:56.468 ]' 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.468 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.726 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.726 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.726 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.726 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.726 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.985 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:56.985 13:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:17:57.919 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.919 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:57.919 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.919 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.920 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.920 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.920 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.920 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.214 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.780 00:17:58.780 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.780 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.780 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.038 { 00:17:59.038 "cntlid": 39, 00:17:59.038 "qid": 0, 00:17:59.038 "state": "enabled", 00:17:59.038 "thread": "nvmf_tgt_poll_group_000", 00:17:59.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:59.038 "listen_address": { 00:17:59.038 "trtype": "TCP", 00:17:59.038 "adrfam": 
"IPv4", 00:17:59.038 "traddr": "10.0.0.2", 00:17:59.038 "trsvcid": "4420" 00:17:59.038 }, 00:17:59.038 "peer_address": { 00:17:59.038 "trtype": "TCP", 00:17:59.038 "adrfam": "IPv4", 00:17:59.038 "traddr": "10.0.0.1", 00:17:59.038 "trsvcid": "55332" 00:17:59.038 }, 00:17:59.038 "auth": { 00:17:59.038 "state": "completed", 00:17:59.038 "digest": "sha256", 00:17:59.038 "dhgroup": "ffdhe6144" 00:17:59.038 } 00:17:59.038 } 00:17:59.038 ]' 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.038 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.296 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:17:59.296 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.236 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.517 
13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.517 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.452 00:18:01.452 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.452 13:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.452 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.712 { 00:18:01.712 "cntlid": 41, 00:18:01.712 "qid": 0, 00:18:01.712 "state": "enabled", 00:18:01.712 "thread": "nvmf_tgt_poll_group_000", 00:18:01.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:01.712 "listen_address": { 00:18:01.712 "trtype": "TCP", 00:18:01.712 "adrfam": "IPv4", 00:18:01.712 "traddr": "10.0.0.2", 00:18:01.712 "trsvcid": "4420" 00:18:01.712 }, 00:18:01.712 "peer_address": { 00:18:01.712 "trtype": "TCP", 00:18:01.712 "adrfam": "IPv4", 00:18:01.712 "traddr": "10.0.0.1", 00:18:01.712 "trsvcid": "41624" 00:18:01.712 }, 00:18:01.712 "auth": { 00:18:01.712 "state": "completed", 00:18:01.712 "digest": "sha256", 00:18:01.712 "dhgroup": "ffdhe8192" 00:18:01.712 } 00:18:01.712 } 00:18:01.712 ]' 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.712 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.278 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:02.278 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.211 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.145 00:18:04.145 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.145 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.145 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.404 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.404 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.404 13:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.404 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.404 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.404 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.404 { 00:18:04.404 "cntlid": 43, 00:18:04.404 "qid": 0, 00:18:04.404 "state": "enabled", 00:18:04.404 "thread": "nvmf_tgt_poll_group_000", 00:18:04.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:04.404 "listen_address": { 00:18:04.404 "trtype": "TCP", 00:18:04.404 "adrfam": "IPv4", 00:18:04.404 "traddr": "10.0.0.2", 00:18:04.404 "trsvcid": "4420" 00:18:04.404 }, 00:18:04.404 "peer_address": { 00:18:04.404 "trtype": "TCP", 00:18:04.404 "adrfam": "IPv4", 00:18:04.404 "traddr": "10.0.0.1", 00:18:04.404 "trsvcid": "41652" 00:18:04.404 }, 00:18:04.404 "auth": { 00:18:04.404 "state": "completed", 00:18:04.404 "digest": "sha256", 00:18:04.404 "dhgroup": "ffdhe8192" 00:18:04.404 } 00:18:04.404 } 00:18:04.404 ]' 00:18:04.404 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.404 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.404 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.404 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.404 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.661 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.661 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.661 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.920 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:04.920 13:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:05.854 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:06.111 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:06.111 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.111 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.111 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.112 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.045 00:18:07.045 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.045 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.045 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.302 { 00:18:07.302 "cntlid": 45, 00:18:07.302 "qid": 0, 00:18:07.302 "state": "enabled", 00:18:07.302 "thread": "nvmf_tgt_poll_group_000", 00:18:07.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:07.302 
"listen_address": { 00:18:07.302 "trtype": "TCP", 00:18:07.302 "adrfam": "IPv4", 00:18:07.302 "traddr": "10.0.0.2", 00:18:07.302 "trsvcid": "4420" 00:18:07.302 }, 00:18:07.302 "peer_address": { 00:18:07.302 "trtype": "TCP", 00:18:07.302 "adrfam": "IPv4", 00:18:07.302 "traddr": "10.0.0.1", 00:18:07.302 "trsvcid": "41666" 00:18:07.302 }, 00:18:07.302 "auth": { 00:18:07.302 "state": "completed", 00:18:07.302 "digest": "sha256", 00:18:07.302 "dhgroup": "ffdhe8192" 00:18:07.302 } 00:18:07.302 } 00:18:07.302 ]' 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.302 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.303 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.303 13:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.868 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:07.868 13:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.801 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.741 00:18:09.741 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.741 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:09.741 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.999 { 00:18:09.999 "cntlid": 47, 00:18:09.999 "qid": 0, 00:18:09.999 "state": "enabled", 00:18:09.999 "thread": "nvmf_tgt_poll_group_000", 00:18:09.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:09.999 "listen_address": { 00:18:09.999 "trtype": "TCP", 00:18:09.999 "adrfam": "IPv4", 00:18:09.999 "traddr": "10.0.0.2", 00:18:09.999 "trsvcid": "4420" 00:18:09.999 }, 00:18:09.999 "peer_address": { 00:18:09.999 "trtype": "TCP", 00:18:09.999 "adrfam": "IPv4", 00:18:09.999 "traddr": "10.0.0.1", 00:18:09.999 "trsvcid": "60470" 00:18:09.999 }, 00:18:09.999 "auth": { 00:18:09.999 "state": "completed", 00:18:09.999 "digest": "sha256", 00:18:09.999 "dhgroup": "ffdhe8192" 00:18:09.999 } 00:18:09.999 } 00:18:09.999 ]' 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.999 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.999 13:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.000 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.000 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.000 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.000 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.000 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.258 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:10.258 13:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.191 13:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.448 
13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.448 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.012 00:18:12.012 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.012 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.012 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.269 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.269 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.269 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.269 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.269 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.269 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.270 { 00:18:12.270 "cntlid": 49, 00:18:12.270 "qid": 0, 00:18:12.270 "state": "enabled", 00:18:12.270 "thread": "nvmf_tgt_poll_group_000", 00:18:12.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:12.270 "listen_address": { 00:18:12.270 "trtype": "TCP", 00:18:12.270 "adrfam": "IPv4", 00:18:12.270 "traddr": "10.0.0.2", 00:18:12.270 "trsvcid": "4420" 00:18:12.270 }, 00:18:12.270 "peer_address": { 00:18:12.270 "trtype": "TCP", 00:18:12.270 "adrfam": "IPv4", 00:18:12.270 "traddr": "10.0.0.1", 00:18:12.270 "trsvcid": "60478" 00:18:12.270 }, 00:18:12.270 "auth": { 00:18:12.270 "state": "completed", 00:18:12.270 "digest": "sha384", 00:18:12.270 "dhgroup": "null" 00:18:12.270 } 00:18:12.270 } 00:18:12.270 ]' 00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:12.270 13:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.527 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:12.527 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:13.461 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.461 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.461 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.461 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.461 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.461 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.461 13:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.461 13:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.719 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.720 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.720 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.720 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.720 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.978 00:18:13.978 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.978 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.978 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.236 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.236 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.236 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.236 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.236 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.236 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.236 { 00:18:14.236 "cntlid": 51, 00:18:14.236 "qid": 0, 00:18:14.236 "state": "enabled", 00:18:14.236 "thread": "nvmf_tgt_poll_group_000", 00:18:14.236 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:14.236 "listen_address": { 00:18:14.236 "trtype": "TCP", 00:18:14.236 "adrfam": "IPv4", 00:18:14.236 "traddr": "10.0.0.2", 00:18:14.236 "trsvcid": "4420" 00:18:14.236 }, 00:18:14.236 "peer_address": { 00:18:14.236 "trtype": "TCP", 00:18:14.236 "adrfam": "IPv4", 00:18:14.236 "traddr": "10.0.0.1", 00:18:14.236 "trsvcid": "60512" 00:18:14.236 }, 00:18:14.236 "auth": { 00:18:14.236 "state": "completed", 00:18:14.236 "digest": "sha384", 00:18:14.236 "dhgroup": "null" 00:18:14.236 } 00:18:14.236 } 00:18:14.236 ]' 00:18:14.236 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.494 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.494 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.494 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.494 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.494 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.494 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.494 13:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.751 13:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:14.751 13:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:15.685 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.685 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:15.685 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.686 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.686 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.686 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.686 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.686 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.944 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.202 00:18:16.202 13:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.202 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.202 13:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.767 { 00:18:16.767 "cntlid": 53, 00:18:16.767 "qid": 0, 00:18:16.767 "state": "enabled", 00:18:16.767 "thread": "nvmf_tgt_poll_group_000", 00:18:16.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:16.767 "listen_address": { 00:18:16.767 "trtype": "TCP", 00:18:16.767 "adrfam": "IPv4", 00:18:16.767 "traddr": "10.0.0.2", 00:18:16.767 "trsvcid": "4420" 00:18:16.767 }, 00:18:16.767 "peer_address": { 00:18:16.767 "trtype": "TCP", 00:18:16.767 "adrfam": "IPv4", 00:18:16.767 "traddr": "10.0.0.1", 00:18:16.767 "trsvcid": "60542" 00:18:16.767 }, 00:18:16.767 "auth": { 00:18:16.767 "state": "completed", 00:18:16.767 "digest": "sha384", 00:18:16.767 "dhgroup": "null" 00:18:16.767 } 00:18:16.767 } 00:18:16.767 ]' 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.767 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.025 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:17.025 13:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:17.959 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.959 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:17.959 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.960 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.960 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.960 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.960 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.960 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:18.218 
13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.218 13:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.476 00:18:18.476 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.476 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.476 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.737 13:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.737 { 00:18:18.737 "cntlid": 55, 00:18:18.737 "qid": 0, 00:18:18.737 "state": "enabled", 00:18:18.737 "thread": "nvmf_tgt_poll_group_000", 00:18:18.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:18.737 "listen_address": { 00:18:18.737 "trtype": "TCP", 00:18:18.737 "adrfam": "IPv4", 00:18:18.737 "traddr": "10.0.0.2", 00:18:18.737 "trsvcid": "4420" 00:18:18.737 }, 00:18:18.737 "peer_address": { 00:18:18.737 "trtype": "TCP", 00:18:18.737 "adrfam": "IPv4", 00:18:18.737 "traddr": "10.0.0.1", 00:18:18.737 "trsvcid": "60584" 00:18:18.737 }, 00:18:18.737 "auth": { 00:18:18.737 "state": "completed", 00:18:18.737 "digest": "sha384", 00:18:18.737 "dhgroup": "null" 00:18:18.737 } 00:18:18.737 } 00:18:18.737 ]' 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.737 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.032 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:19.032 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.032 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.032 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.032 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.316 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:19.316 13:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.251 13:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.251 13:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.509 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.766 00:18:21.022 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.022 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.022 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.279 { 00:18:21.279 "cntlid": 57, 00:18:21.279 "qid": 0, 00:18:21.279 "state": "enabled", 00:18:21.279 "thread": "nvmf_tgt_poll_group_000", 00:18:21.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:21.279 "listen_address": { 00:18:21.279 "trtype": "TCP", 00:18:21.279 "adrfam": "IPv4", 00:18:21.279 "traddr": "10.0.0.2", 00:18:21.279 
"trsvcid": "4420" 00:18:21.279 }, 00:18:21.279 "peer_address": { 00:18:21.279 "trtype": "TCP", 00:18:21.279 "adrfam": "IPv4", 00:18:21.279 "traddr": "10.0.0.1", 00:18:21.279 "trsvcid": "38846" 00:18:21.279 }, 00:18:21.279 "auth": { 00:18:21.279 "state": "completed", 00:18:21.279 "digest": "sha384", 00:18:21.279 "dhgroup": "ffdhe2048" 00:18:21.279 } 00:18:21.279 } 00:18:21.279 ]' 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.279 13:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.537 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:21.537 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.468 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.726 13:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.726 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.984 00:18:22.984 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.984 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.984 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.550 { 00:18:23.550 "cntlid": 59, 00:18:23.550 "qid": 0, 00:18:23.550 "state": "enabled", 00:18:23.550 "thread": "nvmf_tgt_poll_group_000", 00:18:23.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:23.550 "listen_address": { 00:18:23.550 "trtype": "TCP", 00:18:23.550 "adrfam": "IPv4", 00:18:23.550 "traddr": "10.0.0.2", 00:18:23.550 "trsvcid": "4420" 00:18:23.550 }, 00:18:23.550 "peer_address": { 00:18:23.550 "trtype": "TCP", 00:18:23.550 "adrfam": "IPv4", 00:18:23.550 "traddr": "10.0.0.1", 00:18:23.550 "trsvcid": "38876" 00:18:23.550 }, 00:18:23.550 "auth": { 00:18:23.550 "state": "completed", 00:18:23.550 "digest": "sha384", 00:18:23.550 "dhgroup": "ffdhe2048" 00:18:23.550 } 00:18:23.550 } 00:18:23.550 ]' 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.550 13:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.550 13:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.550 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.550 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.550 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.808 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:23.808 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.740 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.996 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:24.996 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.996 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.996 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:24.996 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.996 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.997 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:24.997 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.997 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.997 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.997 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.997 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.997 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.254 00:18:25.254 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.254 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.254 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.513 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.513 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.513 13:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.513 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.513 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.513 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.513 { 00:18:25.513 "cntlid": 61, 00:18:25.513 "qid": 0, 00:18:25.513 "state": "enabled", 00:18:25.513 "thread": "nvmf_tgt_poll_group_000", 00:18:25.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:25.513 "listen_address": { 00:18:25.513 "trtype": "TCP", 00:18:25.513 "adrfam": "IPv4", 00:18:25.513 "traddr": "10.0.0.2", 00:18:25.513 "trsvcid": "4420" 00:18:25.513 }, 00:18:25.513 "peer_address": { 00:18:25.513 "trtype": "TCP", 00:18:25.513 "adrfam": "IPv4", 00:18:25.513 "traddr": "10.0.0.1", 00:18:25.513 "trsvcid": "38900" 00:18:25.513 }, 00:18:25.513 "auth": { 00:18:25.513 "state": "completed", 00:18:25.513 "digest": "sha384", 00:18:25.513 "dhgroup": "ffdhe2048" 00:18:25.513 } 00:18:25.513 } 00:18:25.513 ]' 00:18:25.513 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.771 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.771 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.771 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.771 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.771 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.771 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.771 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.028 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:26.028 13:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.970 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.231 13:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.488 00:18:27.488 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.488 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.488 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.746 { 00:18:27.746 "cntlid": 63, 00:18:27.746 "qid": 0, 00:18:27.746 "state": "enabled", 00:18:27.746 "thread": "nvmf_tgt_poll_group_000", 00:18:27.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:27.746 "listen_address": { 00:18:27.746 "trtype": "TCP", 00:18:27.746 "adrfam": 
"IPv4", 00:18:27.746 "traddr": "10.0.0.2", 00:18:27.746 "trsvcid": "4420" 00:18:27.746 }, 00:18:27.746 "peer_address": { 00:18:27.746 "trtype": "TCP", 00:18:27.746 "adrfam": "IPv4", 00:18:27.746 "traddr": "10.0.0.1", 00:18:27.746 "trsvcid": "38924" 00:18:27.746 }, 00:18:27.746 "auth": { 00:18:27.746 "state": "completed", 00:18:27.746 "digest": "sha384", 00:18:27.746 "dhgroup": "ffdhe2048" 00:18:27.746 } 00:18:27.746 } 00:18:27.746 ]' 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.746 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.004 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.004 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.004 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.004 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.004 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.262 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:28.262 13:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:29.195 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.454 
13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.454 13:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.712 00:18:29.712 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.712 13:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.712 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.970 { 00:18:29.970 "cntlid": 65, 00:18:29.970 "qid": 0, 00:18:29.970 "state": "enabled", 00:18:29.970 "thread": "nvmf_tgt_poll_group_000", 00:18:29.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:29.970 "listen_address": { 00:18:29.970 "trtype": "TCP", 00:18:29.970 "adrfam": "IPv4", 00:18:29.970 "traddr": "10.0.0.2", 00:18:29.970 "trsvcid": "4420" 00:18:29.970 }, 00:18:29.970 "peer_address": { 00:18:29.970 "trtype": "TCP", 00:18:29.970 "adrfam": "IPv4", 00:18:29.970 "traddr": "10.0.0.1", 00:18:29.970 "trsvcid": "50106" 00:18:29.970 }, 00:18:29.970 "auth": { 00:18:29.970 "state": "completed", 00:18:29.970 "digest": "sha384", 00:18:29.970 "dhgroup": "ffdhe3072" 00:18:29.970 } 00:18:29.970 } 00:18:29.970 ]' 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:29.970 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.228 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.228 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.228 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.228 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.228 13:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.496 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:30.496 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.434 13:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.696 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.953 00:18:31.953 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.953 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.954 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.212 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.212 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.212 13:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.212 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.212 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.471 { 00:18:32.471 "cntlid": 67, 00:18:32.471 "qid": 0, 00:18:32.471 "state": "enabled", 00:18:32.471 "thread": "nvmf_tgt_poll_group_000", 00:18:32.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:32.471 "listen_address": { 00:18:32.471 "trtype": "TCP", 00:18:32.471 "adrfam": "IPv4", 00:18:32.471 "traddr": "10.0.0.2", 00:18:32.471 "trsvcid": "4420" 00:18:32.471 }, 00:18:32.471 "peer_address": { 00:18:32.471 "trtype": "TCP", 00:18:32.471 "adrfam": "IPv4", 00:18:32.471 "traddr": "10.0.0.1", 00:18:32.471 "trsvcid": "50128" 00:18:32.471 }, 00:18:32.471 "auth": { 00:18:32.471 "state": "completed", 00:18:32.471 "digest": "sha384", 00:18:32.471 "dhgroup": "ffdhe3072" 00:18:32.471 } 00:18:32.471 } 00:18:32.471 ]' 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.471 13:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.728 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:32.728 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.662 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:33.920 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:33.920 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.920 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.920 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:33.920 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.921 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.486 00:18:34.486 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.486 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.486 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.486 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.486 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.486 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.486 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.486 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.486 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.486 { 00:18:34.486 "cntlid": 69, 00:18:34.486 "qid": 0, 00:18:34.486 "state": "enabled", 00:18:34.486 "thread": "nvmf_tgt_poll_group_000", 00:18:34.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:34.486 
"listen_address": { 00:18:34.486 "trtype": "TCP", 00:18:34.486 "adrfam": "IPv4", 00:18:34.486 "traddr": "10.0.0.2", 00:18:34.486 "trsvcid": "4420" 00:18:34.486 }, 00:18:34.486 "peer_address": { 00:18:34.486 "trtype": "TCP", 00:18:34.486 "adrfam": "IPv4", 00:18:34.486 "traddr": "10.0.0.1", 00:18:34.486 "trsvcid": "50150" 00:18:34.486 }, 00:18:34.486 "auth": { 00:18:34.486 "state": "completed", 00:18:34.486 "digest": "sha384", 00:18:34.486 "dhgroup": "ffdhe3072" 00:18:34.486 } 00:18:34.486 } 00:18:34.486 ]' 00:18:34.486 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.743 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.743 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.743 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.743 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.743 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.743 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.743 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.001 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:35.001 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:35.930 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.931 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:35.931 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.931 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.931 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.931 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.931 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.931 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.191 13:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.448 00:18:36.448 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.448 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.448 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.705 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.705 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.705 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.705 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.705 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.705 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.705 { 00:18:36.705 "cntlid": 71, 00:18:36.705 "qid": 0, 00:18:36.705 "state": "enabled", 00:18:36.705 "thread": "nvmf_tgt_poll_group_000", 00:18:36.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:36.705 "listen_address": { 00:18:36.705 "trtype": "TCP", 00:18:36.705 "adrfam": "IPv4", 00:18:36.705 "traddr": "10.0.0.2", 00:18:36.705 "trsvcid": "4420" 00:18:36.705 }, 00:18:36.705 "peer_address": { 00:18:36.705 "trtype": "TCP", 00:18:36.705 "adrfam": "IPv4", 00:18:36.705 "traddr": "10.0.0.1", 00:18:36.705 "trsvcid": "50184" 00:18:36.705 }, 00:18:36.705 "auth": { 00:18:36.705 "state": "completed", 00:18:36.705 "digest": "sha384", 00:18:36.705 "dhgroup": "ffdhe3072" 00:18:36.705 } 00:18:36.705 } 00:18:36.705 ]' 00:18:36.705 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.961 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.961 13:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.961 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.961 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.961 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.961 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.961 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.219 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:37.219 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.152 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.441 13:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.699 00:18:38.699 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.699 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.699 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.957 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.957 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.957 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.957 13:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.957 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.957 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.957 { 00:18:38.957 "cntlid": 73, 00:18:38.957 "qid": 0, 00:18:38.957 "state": "enabled", 00:18:38.957 "thread": "nvmf_tgt_poll_group_000", 00:18:38.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:38.957 "listen_address": { 00:18:38.957 "trtype": "TCP", 00:18:38.957 "adrfam": "IPv4", 00:18:38.957 "traddr": "10.0.0.2", 00:18:38.957 "trsvcid": "4420" 00:18:38.957 }, 00:18:38.957 "peer_address": { 00:18:38.957 "trtype": "TCP", 00:18:38.957 "adrfam": "IPv4", 00:18:38.957 "traddr": "10.0.0.1", 00:18:38.957 "trsvcid": "34972" 00:18:38.957 }, 00:18:38.957 "auth": { 00:18:38.957 "state": "completed", 00:18:38.957 "digest": "sha384", 00:18:38.957 "dhgroup": "ffdhe4096" 00:18:38.957 } 00:18:38.957 } 00:18:38.957 ]' 00:18:38.957 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.957 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.958 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.222 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.222 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.222 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.222 13:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.222 13:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.480 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:39.480 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:40.411 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.411 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:40.412 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.412 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.412 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.412 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.412 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.412 13:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.669 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.234 00:18:41.234 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.234 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.234 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.492 { 00:18:41.492 "cntlid": 75, 00:18:41.492 "qid": 0, 00:18:41.492 "state": "enabled", 00:18:41.492 "thread": "nvmf_tgt_poll_group_000", 00:18:41.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:41.492 
"listen_address": { 00:18:41.492 "trtype": "TCP", 00:18:41.492 "adrfam": "IPv4", 00:18:41.492 "traddr": "10.0.0.2", 00:18:41.492 "trsvcid": "4420" 00:18:41.492 }, 00:18:41.492 "peer_address": { 00:18:41.492 "trtype": "TCP", 00:18:41.492 "adrfam": "IPv4", 00:18:41.492 "traddr": "10.0.0.1", 00:18:41.492 "trsvcid": "34988" 00:18:41.492 }, 00:18:41.492 "auth": { 00:18:41.492 "state": "completed", 00:18:41.492 "digest": "sha384", 00:18:41.492 "dhgroup": "ffdhe4096" 00:18:41.492 } 00:18:41.492 } 00:18:41.492 ]' 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.492 13:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.492 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.492 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.492 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.492 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.493 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.750 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:41.750 13:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.682 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.941 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.507 00:18:43.507 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:43.507 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.507 13:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.765 { 00:18:43.765 "cntlid": 77, 00:18:43.765 "qid": 0, 00:18:43.765 "state": "enabled", 00:18:43.765 "thread": "nvmf_tgt_poll_group_000", 00:18:43.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:43.765 "listen_address": { 00:18:43.765 "trtype": "TCP", 00:18:43.765 "adrfam": "IPv4", 00:18:43.765 "traddr": "10.0.0.2", 00:18:43.765 "trsvcid": "4420" 00:18:43.765 }, 00:18:43.765 "peer_address": { 00:18:43.765 "trtype": "TCP", 00:18:43.765 "adrfam": "IPv4", 00:18:43.765 "traddr": "10.0.0.1", 00:18:43.765 "trsvcid": "35022" 00:18:43.765 }, 00:18:43.765 "auth": { 00:18:43.765 "state": "completed", 00:18:43.765 "digest": "sha384", 00:18:43.765 "dhgroup": "ffdhe4096" 00:18:43.765 } 00:18:43.765 } 00:18:43.765 ]' 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.765 13:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.765 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.766 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.024 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:44.024 13:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.959 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:45.218 13:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.218 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.783 00:18:45.783 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.783 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.783 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.042 13:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.042 { 00:18:46.042 "cntlid": 79, 00:18:46.042 "qid": 0, 00:18:46.042 "state": "enabled", 00:18:46.042 "thread": "nvmf_tgt_poll_group_000", 00:18:46.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:46.042 "listen_address": { 00:18:46.042 "trtype": "TCP", 00:18:46.042 "adrfam": "IPv4", 00:18:46.042 "traddr": "10.0.0.2", 00:18:46.042 "trsvcid": "4420" 00:18:46.042 }, 00:18:46.042 "peer_address": { 00:18:46.042 "trtype": "TCP", 00:18:46.042 "adrfam": "IPv4", 00:18:46.042 "traddr": "10.0.0.1", 00:18:46.042 "trsvcid": "35056" 00:18:46.042 }, 00:18:46.042 "auth": { 00:18:46.042 "state": "completed", 00:18:46.042 "digest": "sha384", 00:18:46.042 "dhgroup": "ffdhe4096" 00:18:46.042 } 00:18:46.042 } 00:18:46.042 ]' 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.042 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.042 13:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.300 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:46.300 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:47.233 13:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.492 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.056 00:18:48.056 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.056 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.056 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.314 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.314 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.314 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.314 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.314 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.314 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.314 { 00:18:48.314 "cntlid": 81, 00:18:48.314 "qid": 0, 00:18:48.314 "state": "enabled", 00:18:48.314 "thread": "nvmf_tgt_poll_group_000", 00:18:48.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:48.314 "listen_address": { 
00:18:48.314 "trtype": "TCP", 00:18:48.314 "adrfam": "IPv4", 00:18:48.314 "traddr": "10.0.0.2", 00:18:48.314 "trsvcid": "4420" 00:18:48.314 }, 00:18:48.315 "peer_address": { 00:18:48.315 "trtype": "TCP", 00:18:48.315 "adrfam": "IPv4", 00:18:48.315 "traddr": "10.0.0.1", 00:18:48.315 "trsvcid": "35086" 00:18:48.315 }, 00:18:48.315 "auth": { 00:18:48.315 "state": "completed", 00:18:48.315 "digest": "sha384", 00:18:48.315 "dhgroup": "ffdhe6144" 00:18:48.315 } 00:18:48.315 } 00:18:48.315 ]' 00:18:48.315 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.315 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.571 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.571 13:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.571 13:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.571 13:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.571 13:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.571 13:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.828 13:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:48.828 13:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.762 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.021 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.586 00:18:50.586 13:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.586 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.586 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.845 { 00:18:50.845 "cntlid": 83, 00:18:50.845 "qid": 0, 00:18:50.845 "state": "enabled", 00:18:50.845 "thread": "nvmf_tgt_poll_group_000", 00:18:50.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:50.845 "listen_address": { 00:18:50.845 "trtype": "TCP", 00:18:50.845 "adrfam": "IPv4", 00:18:50.845 "traddr": "10.0.0.2", 00:18:50.845 "trsvcid": "4420" 00:18:50.845 }, 00:18:50.845 "peer_address": { 00:18:50.845 "trtype": "TCP", 00:18:50.845 "adrfam": "IPv4", 00:18:50.845 "traddr": "10.0.0.1", 00:18:50.845 "trsvcid": "34220" 00:18:50.845 }, 00:18:50.845 "auth": { 00:18:50.845 "state": "completed", 00:18:50.845 "digest": "sha384", 00:18:50.845 "dhgroup": "ffdhe6144" 00:18:50.845 } 00:18:50.845 } 00:18:50.845 ]' 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.845 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.103 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:51.103 13:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:18:52.035 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.035 13:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:52.035 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.035 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.035 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.035 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.035 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.035 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.293 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.224 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.224 { 00:18:53.224 "cntlid": 85, 00:18:53.224 "qid": 0, 00:18:53.224 "state": "enabled", 00:18:53.224 "thread": "nvmf_tgt_poll_group_000", 00:18:53.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:53.224 "listen_address": { 00:18:53.224 "trtype": "TCP", 00:18:53.224 "adrfam": "IPv4", 00:18:53.224 "traddr": "10.0.0.2", 00:18:53.224 "trsvcid": "4420" 00:18:53.224 }, 00:18:53.224 "peer_address": { 00:18:53.224 "trtype": "TCP", 00:18:53.224 "adrfam": "IPv4", 00:18:53.224 "traddr": "10.0.0.1", 00:18:53.224 "trsvcid": "34244" 00:18:53.224 }, 00:18:53.224 "auth": { 00:18:53.224 "state": "completed", 00:18:53.224 "digest": "sha384", 00:18:53.224 "dhgroup": "ffdhe6144" 00:18:53.224 } 00:18:53.224 } 00:18:53.224 ]' 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.224 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.482 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.482 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.482 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:53.482 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.482 13:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.740 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:53.740 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:18:54.673 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.673 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:54.673 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.674 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.674 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.674 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:54.674 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.674 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.931 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:54.931 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.931 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.931 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:54.931 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.931 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.932 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:54.932 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.932 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.932 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.932 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.932 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.932 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.496 00:18:55.496 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.496 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.496 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.754 { 00:18:55.754 "cntlid": 87, 00:18:55.754 "qid": 0, 00:18:55.754 "state": "enabled", 00:18:55.754 "thread": "nvmf_tgt_poll_group_000", 00:18:55.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:55.754 "listen_address": { 00:18:55.754 "trtype": 
"TCP", 00:18:55.754 "adrfam": "IPv4", 00:18:55.754 "traddr": "10.0.0.2", 00:18:55.754 "trsvcid": "4420" 00:18:55.754 }, 00:18:55.754 "peer_address": { 00:18:55.754 "trtype": "TCP", 00:18:55.754 "adrfam": "IPv4", 00:18:55.754 "traddr": "10.0.0.1", 00:18:55.754 "trsvcid": "34292" 00:18:55.754 }, 00:18:55.754 "auth": { 00:18:55.754 "state": "completed", 00:18:55.754 "digest": "sha384", 00:18:55.754 "dhgroup": "ffdhe6144" 00:18:55.754 } 00:18:55.754 } 00:18:55.754 ]' 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.754 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.011 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:56.011 13:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.942 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.199 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:57.199 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.199 13:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.199 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.200 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.132 00:18:58.132 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.132 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.132 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.389 { 00:18:58.389 "cntlid": 89, 00:18:58.389 "qid": 0, 00:18:58.389 "state": "enabled", 00:18:58.389 "thread": "nvmf_tgt_poll_group_000", 00:18:58.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:58.389 "listen_address": { 00:18:58.389 "trtype": "TCP", 00:18:58.389 "adrfam": "IPv4", 00:18:58.389 "traddr": "10.0.0.2", 00:18:58.389 "trsvcid": "4420" 00:18:58.389 }, 00:18:58.389 "peer_address": { 00:18:58.389 "trtype": "TCP", 00:18:58.389 "adrfam": "IPv4", 00:18:58.389 "traddr": "10.0.0.1", 00:18:58.389 "trsvcid": "34322" 00:18:58.389 }, 00:18:58.389 "auth": { 00:18:58.389 "state": "completed", 00:18:58.389 "digest": "sha384", 00:18:58.389 "dhgroup": "ffdhe8192" 00:18:58.389 } 00:18:58.389 } 00:18:58.389 ]' 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.389 13:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.389 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.686 13:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:58.686 13:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.648 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.905 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:59.905 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.905 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:59.905 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:59.905 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.905 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.905 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.906 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.906 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.906 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.906 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.906 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.906 13:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.836 00:19:00.836 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.836 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.836 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.094 { 00:19:01.094 "cntlid": 91, 00:19:01.094 "qid": 0, 00:19:01.094 "state": "enabled", 00:19:01.094 "thread": "nvmf_tgt_poll_group_000", 00:19:01.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:01.094 "listen_address": { 00:19:01.094 "trtype": "TCP", 00:19:01.094 "adrfam": "IPv4", 00:19:01.094 "traddr": "10.0.0.2", 00:19:01.094 "trsvcid": "4420" 00:19:01.094 }, 00:19:01.094 "peer_address": { 00:19:01.094 "trtype": "TCP", 00:19:01.094 "adrfam": "IPv4", 00:19:01.094 "traddr": "10.0.0.1", 00:19:01.094 "trsvcid": "35852" 00:19:01.094 }, 00:19:01.094 "auth": { 00:19:01.094 "state": "completed", 00:19:01.094 "digest": "sha384", 00:19:01.094 "dhgroup": "ffdhe8192" 00:19:01.094 } 00:19:01.094 } 00:19:01.094 ]' 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.094 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.352 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:01.352 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.352 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.610 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:01.610 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.543 13:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.800 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.734 00:19:03.734 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.734 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.734 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.991 { 00:19:03.991 "cntlid": 93, 00:19:03.991 "qid": 0, 00:19:03.991 "state": "enabled", 00:19:03.991 "thread": "nvmf_tgt_poll_group_000", 00:19:03.991 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:03.991 "listen_address": { 00:19:03.991 "trtype": "TCP", 00:19:03.991 "adrfam": "IPv4", 00:19:03.991 "traddr": "10.0.0.2", 00:19:03.991 "trsvcid": "4420" 00:19:03.991 }, 00:19:03.991 "peer_address": { 00:19:03.991 "trtype": "TCP", 00:19:03.991 "adrfam": "IPv4", 00:19:03.991 "traddr": "10.0.0.1", 00:19:03.991 "trsvcid": "35864" 00:19:03.991 }, 00:19:03.991 "auth": { 00:19:03.991 "state": "completed", 00:19:03.991 "digest": "sha384", 00:19:03.991 "dhgroup": "ffdhe8192" 00:19:03.991 } 00:19:03.991 } 00:19:03.991 ]' 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.991 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.248 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:04.248 13:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.180 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.438 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:06.371 00:19:06.371 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:06.371 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.371 13:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.628 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.628 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.628 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.628 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.629 { 00:19:06.629 "cntlid": 95, 00:19:06.629 "qid": 0, 00:19:06.629 "state": "enabled", 00:19:06.629 "thread": "nvmf_tgt_poll_group_000", 00:19:06.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:06.629 "listen_address": { 00:19:06.629 "trtype": "TCP", 00:19:06.629 "adrfam": "IPv4", 00:19:06.629 "traddr": "10.0.0.2", 00:19:06.629 "trsvcid": "4420" 00:19:06.629 }, 00:19:06.629 "peer_address": { 00:19:06.629 "trtype": "TCP", 00:19:06.629 "adrfam": "IPv4", 00:19:06.629 "traddr": "10.0.0.1", 00:19:06.629 "trsvcid": "35886" 00:19:06.629 }, 00:19:06.629 "auth": { 00:19:06.629 "state": "completed", 00:19:06.629 "digest": "sha384", 00:19:06.629 "dhgroup": "ffdhe8192" 00:19:06.629 } 00:19:06.629 } 00:19:06.629 ]' 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.629 13:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.629 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.887 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:06.887 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.820 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.386 13:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.644 00:19:08.644 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.644 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.644 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.902 13:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.902 { 00:19:08.902 "cntlid": 97, 00:19:08.902 "qid": 0, 00:19:08.902 "state": "enabled", 00:19:08.902 "thread": "nvmf_tgt_poll_group_000", 00:19:08.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:08.902 "listen_address": { 00:19:08.902 "trtype": "TCP", 00:19:08.902 "adrfam": "IPv4", 00:19:08.902 "traddr": "10.0.0.2", 00:19:08.902 "trsvcid": "4420" 00:19:08.902 }, 00:19:08.902 "peer_address": { 00:19:08.902 "trtype": "TCP", 00:19:08.902 "adrfam": "IPv4", 00:19:08.902 "traddr": "10.0.0.1", 00:19:08.902 "trsvcid": "56774" 00:19:08.902 }, 00:19:08.902 "auth": { 00:19:08.902 "state": "completed", 00:19:08.902 "digest": "sha512", 00:19:08.902 "dhgroup": "null" 00:19:08.902 } 00:19:08.902 } 00:19:08.902 ]' 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.902 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.468 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:09.468 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.401 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.401 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.966 00:19:10.966 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.966 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.966 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.224 { 00:19:11.224 "cntlid": 99, 
00:19:11.224 "qid": 0, 00:19:11.224 "state": "enabled", 00:19:11.224 "thread": "nvmf_tgt_poll_group_000", 00:19:11.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:11.224 "listen_address": { 00:19:11.224 "trtype": "TCP", 00:19:11.224 "adrfam": "IPv4", 00:19:11.224 "traddr": "10.0.0.2", 00:19:11.224 "trsvcid": "4420" 00:19:11.224 }, 00:19:11.224 "peer_address": { 00:19:11.224 "trtype": "TCP", 00:19:11.224 "adrfam": "IPv4", 00:19:11.224 "traddr": "10.0.0.1", 00:19:11.224 "trsvcid": "56796" 00:19:11.224 }, 00:19:11.224 "auth": { 00:19:11.224 "state": "completed", 00:19:11.224 "digest": "sha512", 00:19:11.224 "dhgroup": "null" 00:19:11.224 } 00:19:11.224 } 00:19:11.224 ]' 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.224 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.225 13:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.483 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret 
DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:11.483 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.414 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.672 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.930 00:19:13.187 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.187 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.187 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.445 { 00:19:13.445 "cntlid": 101, 00:19:13.445 "qid": 0, 00:19:13.445 "state": "enabled", 00:19:13.445 "thread": "nvmf_tgt_poll_group_000", 00:19:13.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:13.445 "listen_address": { 00:19:13.445 "trtype": "TCP", 00:19:13.445 "adrfam": "IPv4", 00:19:13.445 "traddr": "10.0.0.2", 00:19:13.445 "trsvcid": "4420" 00:19:13.445 }, 00:19:13.445 "peer_address": { 00:19:13.445 "trtype": "TCP", 00:19:13.445 "adrfam": "IPv4", 00:19:13.445 "traddr": "10.0.0.1", 00:19:13.445 "trsvcid": "56834" 00:19:13.445 }, 00:19:13.445 "auth": { 00:19:13.445 "state": "completed", 00:19:13.445 "digest": "sha512", 00:19:13.445 "dhgroup": "null" 00:19:13.445 } 00:19:13.445 } 
00:19:13.445 ]' 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.445 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.703 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:13.703 13:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:14.636 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.636 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.636 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:14.636 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.636 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.636 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.637 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.637 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.637 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.895 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.152 00:19:15.152 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.152 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.153 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.411 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.411 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:15.411 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.411 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.669 { 00:19:15.669 "cntlid": 103, 00:19:15.669 "qid": 0, 00:19:15.669 "state": "enabled", 00:19:15.669 "thread": "nvmf_tgt_poll_group_000", 00:19:15.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:15.669 "listen_address": { 00:19:15.669 "trtype": "TCP", 00:19:15.669 "adrfam": "IPv4", 00:19:15.669 "traddr": "10.0.0.2", 00:19:15.669 "trsvcid": "4420" 00:19:15.669 }, 00:19:15.669 "peer_address": { 00:19:15.669 "trtype": "TCP", 00:19:15.669 "adrfam": "IPv4", 00:19:15.669 "traddr": "10.0.0.1", 00:19:15.669 "trsvcid": "56860" 00:19:15.669 }, 00:19:15.669 "auth": { 00:19:15.669 "state": "completed", 00:19:15.669 "digest": "sha512", 00:19:15.669 "dhgroup": "null" 00:19:15.669 } 00:19:15.669 } 00:19:15.669 ]' 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.669 13:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.669 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.927 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:15.927 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.860 13:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.860 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.118 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.376 00:19:17.376 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.376 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.376 13:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.634 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.634 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.634 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.634 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.634 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.634 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.634 { 00:19:17.634 "cntlid": 105, 00:19:17.634 "qid": 0, 00:19:17.634 "state": "enabled", 00:19:17.634 "thread": "nvmf_tgt_poll_group_000", 00:19:17.634 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:17.634 "listen_address": { 00:19:17.634 "trtype": "TCP", 00:19:17.634 "adrfam": "IPv4", 00:19:17.634 "traddr": "10.0.0.2", 00:19:17.634 "trsvcid": "4420" 00:19:17.634 }, 00:19:17.634 "peer_address": { 00:19:17.634 "trtype": "TCP", 00:19:17.634 "adrfam": "IPv4", 00:19:17.634 "traddr": "10.0.0.1", 00:19:17.634 "trsvcid": "56874" 00:19:17.634 }, 00:19:17.634 "auth": { 00:19:17.634 "state": "completed", 00:19:17.634 "digest": "sha512", 00:19:17.634 "dhgroup": "ffdhe2048" 00:19:17.634 } 00:19:17.634 } 00:19:17.634 ]' 00:19:17.634 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.893 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.893 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.893 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.893 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.893 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.893 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.893 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.151 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret 
DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:18.151 13:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.188 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.447 13:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.447 13:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.706 00:19:19.706 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.706 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.706 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.964 { 00:19:19.964 "cntlid": 107, 00:19:19.964 "qid": 0, 00:19:19.964 "state": "enabled", 00:19:19.964 "thread": "nvmf_tgt_poll_group_000", 00:19:19.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:19.964 "listen_address": { 00:19:19.964 "trtype": "TCP", 00:19:19.964 "adrfam": "IPv4", 00:19:19.964 "traddr": "10.0.0.2", 00:19:19.964 "trsvcid": "4420" 00:19:19.964 }, 00:19:19.964 "peer_address": { 00:19:19.964 "trtype": "TCP", 00:19:19.964 "adrfam": "IPv4", 00:19:19.964 "traddr": "10.0.0.1", 00:19:19.964 "trsvcid": "51858" 00:19:19.964 }, 00:19:19.964 "auth": { 00:19:19.964 "state": 
"completed", 00:19:19.964 "digest": "sha512", 00:19:19.964 "dhgroup": "ffdhe2048" 00:19:19.964 } 00:19:19.964 } 00:19:19.964 ]' 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.964 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.530 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:20.530 13:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:21.096 13:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.354 13:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.354 13:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.354 13:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.354 13:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.354 13:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.354 13:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.354 13:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.612 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.869 00:19:21.869 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.869 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.869 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.127 
13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.127 { 00:19:22.127 "cntlid": 109, 00:19:22.127 "qid": 0, 00:19:22.127 "state": "enabled", 00:19:22.127 "thread": "nvmf_tgt_poll_group_000", 00:19:22.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:22.127 "listen_address": { 00:19:22.127 "trtype": "TCP", 00:19:22.127 "adrfam": "IPv4", 00:19:22.127 "traddr": "10.0.0.2", 00:19:22.127 "trsvcid": "4420" 00:19:22.127 }, 00:19:22.127 "peer_address": { 00:19:22.127 "trtype": "TCP", 00:19:22.127 "adrfam": "IPv4", 00:19:22.127 "traddr": "10.0.0.1", 00:19:22.127 "trsvcid": "51886" 00:19:22.127 }, 00:19:22.127 "auth": { 00:19:22.127 "state": "completed", 00:19:22.127 "digest": "sha512", 00:19:22.127 "dhgroup": "ffdhe2048" 00:19:22.127 } 00:19:22.127 } 00:19:22.127 ]' 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.127 13:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.127 13:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.385 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:22.385 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:23.324 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.324 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:23.324 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.324 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.324 
13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.324 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.324 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:23.324 13:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.582 13:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.582 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.146 00:19:24.146 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.146 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.146 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.403 { 00:19:24.403 "cntlid": 111, 
00:19:24.403 "qid": 0, 00:19:24.403 "state": "enabled", 00:19:24.403 "thread": "nvmf_tgt_poll_group_000", 00:19:24.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:24.403 "listen_address": { 00:19:24.403 "trtype": "TCP", 00:19:24.403 "adrfam": "IPv4", 00:19:24.403 "traddr": "10.0.0.2", 00:19:24.403 "trsvcid": "4420" 00:19:24.403 }, 00:19:24.403 "peer_address": { 00:19:24.403 "trtype": "TCP", 00:19:24.403 "adrfam": "IPv4", 00:19:24.403 "traddr": "10.0.0.1", 00:19:24.403 "trsvcid": "51914" 00:19:24.403 }, 00:19:24.403 "auth": { 00:19:24.403 "state": "completed", 00:19:24.403 "digest": "sha512", 00:19:24.403 "dhgroup": "ffdhe2048" 00:19:24.403 } 00:19:24.403 } 00:19:24.403 ]' 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.403 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.660 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:24.661 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.592 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.849 13:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.849 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.414 00:19:26.414 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.414 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.414 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.672 { 00:19:26.672 "cntlid": 113, 00:19:26.672 "qid": 0, 00:19:26.672 "state": "enabled", 00:19:26.672 "thread": "nvmf_tgt_poll_group_000", 00:19:26.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:26.672 "listen_address": { 00:19:26.672 "trtype": "TCP", 00:19:26.672 "adrfam": "IPv4", 00:19:26.672 "traddr": "10.0.0.2", 00:19:26.672 "trsvcid": "4420" 00:19:26.672 }, 00:19:26.672 "peer_address": { 00:19:26.672 "trtype": "TCP", 00:19:26.672 "adrfam": "IPv4", 00:19:26.672 "traddr": "10.0.0.1", 00:19:26.672 "trsvcid": "51938" 00:19:26.672 }, 00:19:26.672 "auth": { 00:19:26.672 "state": 
"completed", 00:19:26.672 "digest": "sha512", 00:19:26.672 "dhgroup": "ffdhe3072" 00:19:26.672 } 00:19:26.672 } 00:19:26.672 ]' 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.672 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.931 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:26.931 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret 
DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.864 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.122 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:28.122 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.122 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:28.122 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:28.122 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:28.122 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.122 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.123 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.123 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.123 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.123 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.123 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.123 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.688 00:19:28.688 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.688 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.688 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.688 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.688 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.688 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.688 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.946 { 00:19:28.946 "cntlid": 115, 00:19:28.946 "qid": 0, 00:19:28.946 "state": "enabled", 00:19:28.946 "thread": "nvmf_tgt_poll_group_000", 00:19:28.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:28.946 "listen_address": { 00:19:28.946 "trtype": "TCP", 00:19:28.946 "adrfam": "IPv4", 00:19:28.946 "traddr": "10.0.0.2", 00:19:28.946 "trsvcid": "4420" 00:19:28.946 }, 00:19:28.946 "peer_address": { 00:19:28.946 "trtype": "TCP", 00:19:28.946 "adrfam": "IPv4", 00:19:28.946 "traddr": "10.0.0.1", 00:19:28.946 "trsvcid": "51968" 00:19:28.946 }, 00:19:28.946 "auth": { 00:19:28.946 "state": "completed", 00:19:28.946 "digest": "sha512", 00:19:28.946 "dhgroup": "ffdhe3072" 00:19:28.946 } 00:19:28.946 } 00:19:28.946 ]' 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.946 13:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.946 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.204 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:29.204 13:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:30.138 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.396 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.655 00:19:30.655 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.655 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.655 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.913 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.913 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.913 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.913 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.913 13:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.913 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.913 { 00:19:30.913 "cntlid": 117, 00:19:30.913 "qid": 0, 00:19:30.913 "state": "enabled", 00:19:30.913 "thread": "nvmf_tgt_poll_group_000", 00:19:30.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:30.913 "listen_address": { 00:19:30.913 "trtype": "TCP", 00:19:30.913 "adrfam": "IPv4", 00:19:30.913 "traddr": "10.0.0.2", 00:19:30.913 "trsvcid": "4420" 00:19:30.913 }, 00:19:30.913 "peer_address": { 00:19:30.913 "trtype": "TCP", 00:19:30.913 "adrfam": "IPv4", 00:19:30.913 "traddr": "10.0.0.1", 00:19:30.913 "trsvcid": "43846" 00:19:30.913 }, 00:19:30.913 "auth": { 00:19:30.913 "state": "completed", 00:19:30.913 "digest": "sha512", 00:19:30.913 "dhgroup": "ffdhe3072" 00:19:30.913 } 00:19:30.913 } 00:19:30.913 ]' 00:19:30.913 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.171 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.171 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.171 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.171 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.171 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.171 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.171 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.429 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:31.429 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:32.361 13:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.618 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.875 00:19:32.875 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.875 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.875 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.132 { 00:19:33.132 "cntlid": 119, 00:19:33.132 "qid": 0, 00:19:33.132 "state": "enabled", 00:19:33.132 "thread": "nvmf_tgt_poll_group_000", 00:19:33.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:33.132 "listen_address": { 00:19:33.132 "trtype": "TCP", 00:19:33.132 "adrfam": "IPv4", 00:19:33.132 "traddr": "10.0.0.2", 00:19:33.132 "trsvcid": "4420" 00:19:33.132 }, 00:19:33.132 "peer_address": { 00:19:33.132 "trtype": "TCP", 00:19:33.132 "adrfam": "IPv4", 00:19:33.132 "traddr": "10.0.0.1", 
00:19:33.132 "trsvcid": "43864" 00:19:33.132 }, 00:19:33.132 "auth": { 00:19:33.132 "state": "completed", 00:19:33.132 "digest": "sha512", 00:19:33.132 "dhgroup": "ffdhe3072" 00:19:33.132 } 00:19:33.132 } 00:19:33.132 ]' 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.132 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.133 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.390 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:33.390 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.390 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.390 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.390 13:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.648 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:33.648 13:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.581 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:34.838 13:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.838 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.403 00:19:35.404 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.404 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.404 13:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.662 { 00:19:35.662 "cntlid": 121, 00:19:35.662 "qid": 0, 00:19:35.662 "state": "enabled", 00:19:35.662 "thread": "nvmf_tgt_poll_group_000", 00:19:35.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:35.662 "listen_address": { 00:19:35.662 "trtype": "TCP", 00:19:35.662 "adrfam": "IPv4", 00:19:35.662 "traddr": "10.0.0.2", 00:19:35.662 "trsvcid": "4420" 00:19:35.662 }, 00:19:35.662 "peer_address": { 00:19:35.662 "trtype": "TCP", 00:19:35.662 "adrfam": "IPv4", 00:19:35.662 "traddr": "10.0.0.1", 00:19:35.662 "trsvcid": "43894" 00:19:35.662 }, 00:19:35.662 "auth": { 00:19:35.662 "state": "completed", 00:19:35.662 "digest": "sha512", 00:19:35.662 "dhgroup": "ffdhe4096" 00:19:35.662 } 00:19:35.662 } 00:19:35.662 ]' 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.662 13:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.662 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.919 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:35.919 13:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:36.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:36.896 13:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.896 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.153 13:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.153 13:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.411 00:19:37.411 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.411 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.411 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.669 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.928 { 00:19:37.928 "cntlid": 123, 00:19:37.928 "qid": 0, 00:19:37.928 "state": "enabled", 00:19:37.928 "thread": "nvmf_tgt_poll_group_000", 00:19:37.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:37.928 "listen_address": { 00:19:37.928 "trtype": "TCP", 00:19:37.928 "adrfam": "IPv4", 00:19:37.928 "traddr": "10.0.0.2", 00:19:37.928 "trsvcid": "4420" 00:19:37.928 }, 00:19:37.928 "peer_address": { 00:19:37.928 "trtype": "TCP", 00:19:37.928 "adrfam": "IPv4", 00:19:37.928 "traddr": "10.0.0.1", 00:19:37.928 "trsvcid": "43904" 00:19:37.928 }, 00:19:37.928 "auth": { 00:19:37.928 "state": "completed", 00:19:37.928 "digest": "sha512", 00:19:37.928 "dhgroup": "ffdhe4096" 00:19:37.928 } 00:19:37.928 } 00:19:37.928 ]' 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.928 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.929 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.187 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:38.187 13:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:39.209 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.209 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:39.209 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.209 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.209 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.209 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.209 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.209 13:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.467 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.725 00:19:39.725 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.725 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.725 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.983 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.983 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.983 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.983 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.983 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.983 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.983 { 00:19:39.983 "cntlid": 125, 00:19:39.983 "qid": 0, 00:19:39.983 "state": "enabled", 00:19:39.983 "thread": "nvmf_tgt_poll_group_000", 00:19:39.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:39.983 "listen_address": { 00:19:39.983 "trtype": "TCP", 00:19:39.983 "adrfam": "IPv4", 00:19:39.983 "traddr": "10.0.0.2", 00:19:39.983 
"trsvcid": "4420" 00:19:39.983 }, 00:19:39.983 "peer_address": { 00:19:39.983 "trtype": "TCP", 00:19:39.983 "adrfam": "IPv4", 00:19:39.983 "traddr": "10.0.0.1", 00:19:39.983 "trsvcid": "35350" 00:19:39.983 }, 00:19:39.983 "auth": { 00:19:39.983 "state": "completed", 00:19:39.983 "digest": "sha512", 00:19:39.983 "dhgroup": "ffdhe4096" 00:19:39.983 } 00:19:39.983 } 00:19:39.983 ]' 00:19:39.983 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.241 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.241 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.241 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:40.241 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.241 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.241 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.241 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.500 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:40.500 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.433 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.691 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.951 00:19:41.951 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.951 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:41.951 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.209 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.209 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.209 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.209 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.468 { 00:19:42.468 "cntlid": 127, 00:19:42.468 "qid": 0, 00:19:42.468 "state": "enabled", 00:19:42.468 "thread": "nvmf_tgt_poll_group_000", 00:19:42.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:42.468 "listen_address": { 00:19:42.468 "trtype": "TCP", 00:19:42.468 "adrfam": "IPv4", 00:19:42.468 "traddr": "10.0.0.2", 00:19:42.468 "trsvcid": "4420" 00:19:42.468 }, 00:19:42.468 "peer_address": { 00:19:42.468 "trtype": "TCP", 00:19:42.468 "adrfam": "IPv4", 00:19:42.468 "traddr": "10.0.0.1", 00:19:42.468 "trsvcid": "35372" 00:19:42.468 }, 00:19:42.468 "auth": { 00:19:42.468 "state": "completed", 00:19:42.468 "digest": "sha512", 00:19:42.468 "dhgroup": "ffdhe4096" 00:19:42.468 } 00:19:42.468 } 00:19:42.468 ]' 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.468 
13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.468 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.726 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:42.726 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:43.660 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.918 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.919 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.484 00:19:44.484 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.484 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.484 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.742 13:18:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.742 { 00:19:44.742 "cntlid": 129, 00:19:44.742 "qid": 0, 00:19:44.742 "state": "enabled", 00:19:44.742 "thread": "nvmf_tgt_poll_group_000", 00:19:44.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:44.742 "listen_address": { 00:19:44.742 "trtype": "TCP", 00:19:44.742 "adrfam": "IPv4", 00:19:44.742 "traddr": "10.0.0.2", 00:19:44.742 "trsvcid": "4420" 00:19:44.742 }, 00:19:44.742 "peer_address": { 00:19:44.742 "trtype": "TCP", 00:19:44.742 "adrfam": "IPv4", 00:19:44.742 "traddr": "10.0.0.1", 00:19:44.742 "trsvcid": "35386" 00:19:44.742 }, 00:19:44.742 "auth": { 00:19:44.742 "state": "completed", 00:19:44.742 "digest": "sha512", 00:19:44.742 "dhgroup": "ffdhe6144" 00:19:44.742 } 00:19:44.742 } 00:19:44.742 ]' 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.742 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.000 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:45.000 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:45.934 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.934 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:45.934 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.934 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.934 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.934 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.934 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:45.934 13:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.191 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.757 00:19:46.757 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.757 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.757 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.014 { 00:19:47.014 "cntlid": 131, 00:19:47.014 "qid": 0, 00:19:47.014 "state": "enabled", 00:19:47.014 "thread": "nvmf_tgt_poll_group_000", 00:19:47.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:47.014 "listen_address": { 00:19:47.014 "trtype": "TCP", 00:19:47.014 "adrfam": "IPv4", 00:19:47.014 "traddr": "10.0.0.2", 00:19:47.014 
"trsvcid": "4420" 00:19:47.014 }, 00:19:47.014 "peer_address": { 00:19:47.014 "trtype": "TCP", 00:19:47.014 "adrfam": "IPv4", 00:19:47.014 "traddr": "10.0.0.1", 00:19:47.014 "trsvcid": "35404" 00:19:47.014 }, 00:19:47.014 "auth": { 00:19:47.014 "state": "completed", 00:19:47.014 "digest": "sha512", 00:19:47.014 "dhgroup": "ffdhe6144" 00:19:47.014 } 00:19:47.014 } 00:19:47.014 ]' 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.014 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.272 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.272 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.272 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.272 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.272 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.530 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:47.530 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.463 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.721 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:48.721 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.721 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:48.721 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:48.721 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.721 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.722 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.722 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.722 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.722 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.722 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.722 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.722 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.287 00:19:49.287 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.287 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:49.287 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.545 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.545 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.545 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.545 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.545 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.545 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.545 { 00:19:49.545 "cntlid": 133, 00:19:49.545 "qid": 0, 00:19:49.545 "state": "enabled", 00:19:49.545 "thread": "nvmf_tgt_poll_group_000", 00:19:49.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:49.546 "listen_address": { 00:19:49.546 "trtype": "TCP", 00:19:49.546 "adrfam": "IPv4", 00:19:49.546 "traddr": "10.0.0.2", 00:19:49.546 "trsvcid": "4420" 00:19:49.546 }, 00:19:49.546 "peer_address": { 00:19:49.546 "trtype": "TCP", 00:19:49.546 "adrfam": "IPv4", 00:19:49.546 "traddr": "10.0.0.1", 00:19:49.546 "trsvcid": "52838" 00:19:49.546 }, 00:19:49.546 "auth": { 00:19:49.546 "state": "completed", 00:19:49.546 "digest": "sha512", 00:19:49.546 "dhgroup": "ffdhe6144" 00:19:49.546 } 00:19:49.546 } 00:19:49.546 ]' 00:19:49.546 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.546 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.546 13:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.804 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.804 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.804 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.804 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.804 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.062 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:50.062 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.996 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.254 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:51.819 00:19:51.819 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.819 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.819 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.083 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.083 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.083 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.083 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.083 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.083 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.083 { 00:19:52.083 "cntlid": 135, 00:19:52.083 "qid": 0, 00:19:52.083 "state": "enabled", 00:19:52.083 "thread": "nvmf_tgt_poll_group_000", 00:19:52.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:52.083 "listen_address": { 00:19:52.083 "trtype": "TCP", 00:19:52.083 "adrfam": "IPv4", 00:19:52.083 "traddr": "10.0.0.2", 00:19:52.083 "trsvcid": "4420" 00:19:52.083 }, 00:19:52.083 "peer_address": { 00:19:52.083 "trtype": "TCP", 00:19:52.083 "adrfam": "IPv4", 00:19:52.083 "traddr": "10.0.0.1", 00:19:52.083 "trsvcid": "52864" 00:19:52.083 }, 00:19:52.083 "auth": { 00:19:52.083 "state": "completed", 00:19:52.083 "digest": "sha512", 00:19:52.083 "dhgroup": "ffdhe6144" 00:19:52.083 } 00:19:52.083 } 00:19:52.083 ]' 00:19:52.083 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.341 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.341 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.341 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.341 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.341 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.341 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.341 13:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.599 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:52.599 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:19:53.532 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.532 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:53.533 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.533 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.533 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.533 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.533 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.533 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.533 13:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.790 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:53.790 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.790 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:53.790 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:53.790 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.791 13:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.724 00:19:54.724 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.724 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.724 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.724 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.724 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.724 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.724 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.982 { 00:19:54.982 "cntlid": 137, 00:19:54.982 "qid": 0, 00:19:54.982 "state": "enabled", 00:19:54.982 "thread": "nvmf_tgt_poll_group_000", 00:19:54.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:54.982 "listen_address": { 00:19:54.982 "trtype": "TCP", 00:19:54.982 "adrfam": "IPv4", 00:19:54.982 "traddr": "10.0.0.2", 00:19:54.982 
"trsvcid": "4420" 00:19:54.982 }, 00:19:54.982 "peer_address": { 00:19:54.982 "trtype": "TCP", 00:19:54.982 "adrfam": "IPv4", 00:19:54.982 "traddr": "10.0.0.1", 00:19:54.982 "trsvcid": "52892" 00:19:54.982 }, 00:19:54.982 "auth": { 00:19:54.982 "state": "completed", 00:19:54.982 "digest": "sha512", 00:19:54.982 "dhgroup": "ffdhe8192" 00:19:54.982 } 00:19:54.982 } 00:19:54.982 ]' 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.982 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.983 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.241 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:55.241 13:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:19:56.174 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.174 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.175 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.175 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.175 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.175 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.175 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.175 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.434 13:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.434 13:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.434 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.434 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.434 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.434 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.368 00:19:57.368 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.368 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.368 13:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.626 { 00:19:57.626 "cntlid": 139, 00:19:57.626 "qid": 0, 00:19:57.626 "state": "enabled", 00:19:57.626 "thread": "nvmf_tgt_poll_group_000", 00:19:57.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:57.626 "listen_address": { 00:19:57.626 "trtype": "TCP", 00:19:57.626 "adrfam": "IPv4", 00:19:57.626 "traddr": "10.0.0.2", 00:19:57.626 "trsvcid": "4420" 00:19:57.626 }, 00:19:57.626 "peer_address": { 00:19:57.626 "trtype": "TCP", 00:19:57.626 "adrfam": "IPv4", 00:19:57.626 "traddr": "10.0.0.1", 00:19:57.626 "trsvcid": "52932" 00:19:57.626 }, 00:19:57.626 "auth": { 00:19:57.626 "state": "completed", 00:19:57.626 "digest": "sha512", 00:19:57.626 "dhgroup": "ffdhe8192" 00:19:57.626 } 00:19:57.626 } 00:19:57.626 ]' 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.626 13:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.626 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.885 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:57.885 13:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: --dhchap-ctrl-secret DHHC-1:02:ODkxZGE1YjE0NmY0MzRjMTBhMmNlNjc3ODNlMjJhODA4NmNiNDc0YTg5YjkxY2E2gdnm6w==: 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.901 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.159 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.417 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.417 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.417 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.417 13:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.350 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.350 13:18:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.350 { 00:20:00.350 "cntlid": 141, 00:20:00.350 "qid": 0, 00:20:00.350 "state": "enabled", 00:20:00.350 "thread": "nvmf_tgt_poll_group_000", 00:20:00.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:00.350 "listen_address": { 00:20:00.350 "trtype": "TCP", 00:20:00.350 "adrfam": "IPv4", 00:20:00.350 "traddr": "10.0.0.2", 00:20:00.350 "trsvcid": "4420" 00:20:00.350 }, 00:20:00.350 "peer_address": { 00:20:00.350 "trtype": "TCP", 00:20:00.350 "adrfam": "IPv4", 00:20:00.350 "traddr": "10.0.0.1", 00:20:00.350 "trsvcid": "45062" 00:20:00.350 }, 00:20:00.350 "auth": { 00:20:00.350 "state": "completed", 00:20:00.350 "digest": "sha512", 00:20:00.350 "dhgroup": "ffdhe8192" 00:20:00.350 } 00:20:00.350 } 00:20:00.350 ]' 00:20:00.350 13:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.608 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.608 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.608 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.608 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.608 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.608 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.608 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.865 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:20:00.865 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:01:N2RjNGFkYTcxNTQ3ZmYwOGZlYTBiMDI4MzY1N2FlYTJ30gjn: 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.797 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.054 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.988 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.988 { 00:20:02.988 "cntlid": 143, 00:20:02.988 "qid": 0, 00:20:02.988 "state": "enabled", 00:20:02.988 "thread": "nvmf_tgt_poll_group_000", 00:20:02.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:02.988 "listen_address": { 00:20:02.988 "trtype": "TCP", 00:20:02.988 "adrfam": 
"IPv4", 00:20:02.988 "traddr": "10.0.0.2", 00:20:02.988 "trsvcid": "4420" 00:20:02.988 }, 00:20:02.988 "peer_address": { 00:20:02.988 "trtype": "TCP", 00:20:02.988 "adrfam": "IPv4", 00:20:02.988 "traddr": "10.0.0.1", 00:20:02.988 "trsvcid": "45092" 00:20:02.988 }, 00:20:02.988 "auth": { 00:20:02.988 "state": "completed", 00:20:02.988 "digest": "sha512", 00:20:02.988 "dhgroup": "ffdhe8192" 00:20:02.988 } 00:20:02.988 } 00:20:02.988 ]' 00:20:02.988 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.245 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.245 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.245 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:03.245 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.245 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.245 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.245 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.503 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:20:03.503 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:20:04.435 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.436 13:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.694 13:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.694 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.627 00:20:05.627 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.627 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.627 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.885 { 00:20:05.885 "cntlid": 145, 00:20:05.885 "qid": 0, 00:20:05.885 "state": "enabled", 00:20:05.885 "thread": "nvmf_tgt_poll_group_000", 00:20:05.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:05.885 "listen_address": { 00:20:05.885 "trtype": "TCP", 00:20:05.885 "adrfam": "IPv4", 00:20:05.885 "traddr": "10.0.0.2", 00:20:05.885 "trsvcid": "4420" 00:20:05.885 }, 00:20:05.885 "peer_address": { 00:20:05.885 "trtype": "TCP", 00:20:05.885 "adrfam": "IPv4", 00:20:05.885 "traddr": "10.0.0.1", 00:20:05.885 "trsvcid": "45130" 00:20:05.885 }, 00:20:05.885 "auth": { 00:20:05.885 "state": 
"completed", 00:20:05.885 "digest": "sha512", 00:20:05.885 "dhgroup": "ffdhe8192" 00:20:05.885 } 00:20:05.885 } 00:20:05.885 ]' 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.885 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.143 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:20:06.143 13:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:NTk1NzBmOTFkZmZlZjRmNDRlZjE1YWE1NWRlYjNhYjY5MGQyMjRiYTZjZDAxMjQypdY4hA==: --dhchap-ctrl-secret 
DHHC-1:03:Nzg4YzRhMDFkZjZmNmUyYTJhMGQwNmNmOGIzMzhkYmM3YjU3YjNjYzk5NzNjYzMyOTgyMzhlZmU1ODM3YTg1NeCFLzY=: 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:07.075 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:08.008 request: 00:20:08.008 { 00:20:08.008 "name": "nvme0", 00:20:08.008 "trtype": "tcp", 00:20:08.008 "traddr": "10.0.0.2", 00:20:08.008 "adrfam": "ipv4", 00:20:08.008 "trsvcid": "4420", 00:20:08.008 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:08.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:08.008 "prchk_reftag": false, 00:20:08.008 "prchk_guard": false, 00:20:08.008 "hdgst": false, 00:20:08.008 "ddgst": false, 00:20:08.008 "dhchap_key": "key2", 00:20:08.008 "allow_unrecognized_csi": false, 00:20:08.008 "method": "bdev_nvme_attach_controller", 00:20:08.008 "req_id": 1 00:20:08.008 } 00:20:08.008 Got JSON-RPC error response 00:20:08.008 response: 00:20:08.008 { 00:20:08.008 "code": -5, 00:20:08.008 "message": 
"Input/output error" 00:20:08.008 } 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:08.008 13:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.008 13:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.940 request: 00:20:08.940 { 00:20:08.940 "name": "nvme0", 00:20:08.940 "trtype": "tcp", 00:20:08.941 "traddr": "10.0.0.2", 00:20:08.941 "adrfam": "ipv4", 00:20:08.941 "trsvcid": "4420", 00:20:08.941 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:08.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:08.941 "prchk_reftag": false, 00:20:08.941 "prchk_guard": false, 00:20:08.941 "hdgst": 
false, 00:20:08.941 "ddgst": false, 00:20:08.941 "dhchap_key": "key1", 00:20:08.941 "dhchap_ctrlr_key": "ckey2", 00:20:08.941 "allow_unrecognized_csi": false, 00:20:08.941 "method": "bdev_nvme_attach_controller", 00:20:08.941 "req_id": 1 00:20:08.941 } 00:20:08.941 Got JSON-RPC error response 00:20:08.941 response: 00:20:08.941 { 00:20:08.941 "code": -5, 00:20:08.941 "message": "Input/output error" 00:20:08.941 } 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.941 13:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.873 request: 00:20:09.873 { 00:20:09.873 "name": "nvme0", 00:20:09.873 "trtype": 
"tcp", 00:20:09.873 "traddr": "10.0.0.2", 00:20:09.873 "adrfam": "ipv4", 00:20:09.873 "trsvcid": "4420", 00:20:09.873 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:09.873 "prchk_reftag": false, 00:20:09.873 "prchk_guard": false, 00:20:09.873 "hdgst": false, 00:20:09.873 "ddgst": false, 00:20:09.873 "dhchap_key": "key1", 00:20:09.873 "dhchap_ctrlr_key": "ckey1", 00:20:09.873 "allow_unrecognized_csi": false, 00:20:09.873 "method": "bdev_nvme_attach_controller", 00:20:09.873 "req_id": 1 00:20:09.873 } 00:20:09.873 Got JSON-RPC error response 00:20:09.873 response: 00:20:09.873 { 00:20:09.873 "code": -5, 00:20:09.873 "message": "Input/output error" 00:20:09.873 } 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3160091 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3160091 ']' 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3160091 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160091 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160091' 00:20:09.873 killing process with pid 3160091 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3160091 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3160091 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3182870 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3182870 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3182870 ']' 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.873 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3182870 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3182870 ']' 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.131 13:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.388 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.388 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:10.388 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:10.388 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.388 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 null0 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PNz 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.gbZ ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gbZ 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.R7p 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.jFj ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jFj 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.PzI 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.C9t ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C9t 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.DD4 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:10.646 13:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.015 nvme0n1 00:20:12.015 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.015 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.015 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.272 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.272 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.272 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.272 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.272 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.272 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.272 { 00:20:12.272 "cntlid": 1, 00:20:12.272 "qid": 0, 00:20:12.272 "state": "enabled", 00:20:12.272 "thread": "nvmf_tgt_poll_group_000", 00:20:12.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:12.272 "listen_address": { 00:20:12.272 "trtype": "TCP", 00:20:12.272 "adrfam": "IPv4", 00:20:12.272 "traddr": "10.0.0.2", 00:20:12.272 "trsvcid": "4420" 00:20:12.272 }, 00:20:12.272 "peer_address": { 00:20:12.272 "trtype": "TCP", 00:20:12.272 "adrfam": "IPv4", 00:20:12.272 "traddr": 
"10.0.0.1", 00:20:12.273 "trsvcid": "51840" 00:20:12.273 }, 00:20:12.273 "auth": { 00:20:12.273 "state": "completed", 00:20:12.273 "digest": "sha512", 00:20:12.273 "dhgroup": "ffdhe8192" 00:20:12.273 } 00:20:12.273 } 00:20:12.273 ]' 00:20:12.273 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.273 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.273 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.530 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.530 13:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.530 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.530 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.530 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.788 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:20:12.788 13:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:20:13.719 13:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:13.719 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:13.977 13:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.977 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.235 request: 00:20:14.235 { 00:20:14.235 "name": "nvme0", 00:20:14.235 "trtype": "tcp", 00:20:14.235 "traddr": "10.0.0.2", 00:20:14.235 "adrfam": "ipv4", 00:20:14.235 "trsvcid": "4420", 00:20:14.235 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:14.235 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:14.235 "prchk_reftag": false, 00:20:14.235 "prchk_guard": false, 00:20:14.235 "hdgst": false, 00:20:14.235 "ddgst": false, 00:20:14.235 "dhchap_key": "key3", 00:20:14.235 
"allow_unrecognized_csi": false, 00:20:14.235 "method": "bdev_nvme_attach_controller", 00:20:14.235 "req_id": 1 00:20:14.235 } 00:20:14.235 Got JSON-RPC error response 00:20:14.235 response: 00:20:14.235 { 00:20:14.235 "code": -5, 00:20:14.235 "message": "Input/output error" 00:20:14.235 } 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:14.235 13:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:14.493 13:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:14.493 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.494 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.752 request: 00:20:14.752 { 00:20:14.752 "name": "nvme0", 00:20:14.752 "trtype": "tcp", 00:20:14.752 "traddr": "10.0.0.2", 00:20:14.752 "adrfam": "ipv4", 00:20:14.752 "trsvcid": "4420", 00:20:14.752 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:14.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:14.752 "prchk_reftag": false, 00:20:14.752 "prchk_guard": false, 00:20:14.752 "hdgst": false, 00:20:14.752 "ddgst": false, 00:20:14.752 "dhchap_key": "key3", 00:20:14.752 "allow_unrecognized_csi": false, 00:20:14.752 "method": "bdev_nvme_attach_controller", 00:20:14.752 "req_id": 1 00:20:14.752 } 00:20:14.752 Got JSON-RPC error response 00:20:14.752 response: 00:20:14.752 { 00:20:14.752 "code": -5, 00:20:14.752 "message": "Input/output error" 00:20:14.752 } 00:20:14.752 
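The attach attempt above fails during DH-HMAC-CHAP negotiation and the RPC returns the JSON-RPC error shown. A minimal sketch of checking such an error code with plain parameter expansion; the response string is copied from the trace, not fetched from a live target:

```shell
# Response body copied from the trace above (code -5 = Input/output error).
resp='{"code": -5, "message": "Input/output error"}'
code="${resp#*: }"     # drop everything through the first ': ' -> '-5, ...'
code="${code%%,*}"     # keep only the numeric code
if [ "$code" = "-5" ]; then
  echo "attach rejected with I/O error, as the NOT wrapper expects"
fi
```

The test's NOT wrapper treats this rejection as success: a host restricted to sha256 must not authenticate against a target that negotiated a different digest.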
13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:14.752 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.752 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.752 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.752 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:14.752 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:14.752 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:14.753 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:14.753 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:14.753 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.011 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:15.577 request: 00:20:15.577 { 00:20:15.577 "name": "nvme0", 00:20:15.577 "trtype": "tcp", 00:20:15.577 "traddr": "10.0.0.2", 00:20:15.577 "adrfam": "ipv4", 00:20:15.577 "trsvcid": "4420", 00:20:15.577 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:15.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:15.577 "prchk_reftag": false, 00:20:15.577 "prchk_guard": false, 00:20:15.577 "hdgst": false, 00:20:15.577 "ddgst": false, 00:20:15.577 "dhchap_key": "key0", 00:20:15.577 "dhchap_ctrlr_key": "key1", 00:20:15.577 "allow_unrecognized_csi": false, 00:20:15.577 "method": "bdev_nvme_attach_controller", 00:20:15.577 "req_id": 1 00:20:15.577 } 00:20:15.577 Got JSON-RPC error response 00:20:15.577 response: 00:20:15.577 { 00:20:15.577 "code": -5, 00:20:15.577 "message": "Input/output error" 00:20:15.577 } 00:20:15.577 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:15.577 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.577 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.577 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.577 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:15.577 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:15.577 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:16.143 nvme0n1 00:20:16.143 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:16.143 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:16.143 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.143 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.143 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.143 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.709 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:16.709 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.709 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
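The trace then verifies the attach by listing controllers and comparing the name against `nvme0`. This sketch mirrors that check; the JSON below is a stand-in for real `bdev_nvme_get_controllers` output, and it assumes `jq` is installed (as it is on the CI host):

```shell
# Stand-in for `rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers` output.
ctrlrs='[{"name": "nvme0"}]'
# Same extraction the trace shows: jq -r '.[].name'
name=$(printf '%s' "$ctrlrs" | jq -r '.[].name')
[ "$name" = "nvme0" ] && echo "controller nvme0 present"
```

The glob-escaped comparison in the trace (`[[ nvme0 == \n\v\m\e\0 ]]`) is just bash xtrace quoting of this same literal string match.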
00:20:16.709 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.709 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:16.709 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:16.709 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:18.083 nvme0n1 00:20:18.083 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:18.083 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:18.083 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.342 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.342 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:18.342 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.342 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.342 
13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.342 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:18.342 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.342 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:18.599 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.600 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:20:18.600 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: --dhchap-ctrl-secret DHHC-1:03:ZWMwMjFiNWUzNDZmMDQwOTY0NzIwOTQ0OGJiZmU0NjIyN2U2NWE2MmFlZmY0N2UyZmFhMDcwYmMyOWRlNzk0NF9Cy5o=: 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.587 13:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.845 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:19.845 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:19.845 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:19.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:19.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:19.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:19.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:19.846 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:20.782 request: 00:20:20.782 { 00:20:20.782 "name": "nvme0", 00:20:20.782 "trtype": "tcp", 00:20:20.782 "traddr": "10.0.0.2", 00:20:20.782 "adrfam": "ipv4", 00:20:20.782 "trsvcid": "4420", 00:20:20.782 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:20.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:20.782 "prchk_reftag": false, 00:20:20.782 "prchk_guard": false, 00:20:20.782 "hdgst": false, 00:20:20.782 "ddgst": false, 00:20:20.782 "dhchap_key": "key1", 00:20:20.782 "allow_unrecognized_csi": false, 00:20:20.782 "method": "bdev_nvme_attach_controller", 00:20:20.782 "req_id": 1 00:20:20.782 } 00:20:20.782 Got JSON-RPC error response 00:20:20.782 response: 00:20:20.782 { 00:20:20.782 "code": -5, 00:20:20.782 "message": "Input/output error" 00:20:20.782 } 00:20:20.782 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:20.782 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:20.782 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:20.782 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:20.782 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:20.782 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:20.782 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:22.156 nvme0n1 00:20:22.156 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:22.156 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.156 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:22.156 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.156 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.156 13:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.413 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:22.413 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.413 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.413 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.413 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:22.413 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:22.413 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:22.979 nvme0n1 00:20:22.979 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:22.979 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.979 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:23.236 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.236 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.236 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: '' 2s 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: ]] 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk: 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:23.495 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:25.430 
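Before a secret is echoed into the controller's sysfs node, it helps to see the shape being passed around. A loose format check for the DHHC-1 secrets used above; the assumed layout is `DHHC-1:<2-digit id>:<base64 payload>:` as used by SPDK and nvme-cli, and the key value is copied verbatim from the trace:

```shell
# Key copied from the nvme_set_keys call in the trace above.
key='DHHC-1:01:YjdkOTNiMzgwMDRjNmJkYTJmYTUzODA2OTI4NjcwZWP5IcFk:'
shape=bad
case "$key" in
  DHHC-1:[0-9][0-9]:*:) shape=ok ;;
esac
echo "key shape: $shape"
```

This is only a syntactic check; the target still validates the decoded payload and its trailing CRC when the key is used.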
13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: 2s 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:25.430 13:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: ]] 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzQ1M2E5ZGY2Mjc4MzRmYTdjM2MwOGYwN2MyNzI1ODI4OWJiNzI4MzliNGQ0Yzg578qRLQ==: 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:25.430 13:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:27.327 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:27.327 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:27.327 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:27.327 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:27.327 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:27.327 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:27.585 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:27.585 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.585 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:27.585 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.585 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.585 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.585 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:27.585 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:27.585 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:28.958 nvme0n1 00:20:28.958 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:20:28.958 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.958 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.958 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.958 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:28.958 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:29.892 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:29.892 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:29.892 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.149 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.149 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.149 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.149 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.149 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.149 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:30.149 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:30.407 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:30.407 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:30.407 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:30.664 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:31.229 request: 00:20:31.229 { 00:20:31.229 "name": "nvme0", 00:20:31.229 "dhchap_key": "key1", 00:20:31.229 "dhchap_ctrlr_key": "key3", 00:20:31.229 "method": "bdev_nvme_set_keys", 00:20:31.229 "req_id": 1 00:20:31.229 } 00:20:31.229 Got JSON-RPC error response 00:20:31.229 response: 00:20:31.229 { 00:20:31.229 "code": -13, 00:20:31.229 "message": "Permission denied" 00:20:31.229 } 00:20:31.486 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:31.486 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:31.486 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:31.486 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:31.486 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:31.486 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:31.486 13:19:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.753 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:31.753 13:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:32.686 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:32.686 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:32.686 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:32.945 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:34.319 nvme0n1 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.319 13:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:34.319 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:35.253 request: 00:20:35.253 { 00:20:35.253 "name": "nvme0", 00:20:35.253 "dhchap_key": "key2", 00:20:35.253 "dhchap_ctrlr_key": "key0", 00:20:35.253 "method": "bdev_nvme_set_keys", 00:20:35.253 "req_id": 1 00:20:35.253 } 00:20:35.253 Got JSON-RPC error response 00:20:35.253 response: 00:20:35.253 { 00:20:35.253 "code": -13, 00:20:35.253 "message": "Permission denied" 00:20:35.253 } 00:20:35.253 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:35.253 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:35.253 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:35.253 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:35.253 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:35.253 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:35.253 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.511 13:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:35.511 13:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:36.444 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:36.444 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:36.444 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3160228 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3160228 ']' 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3160228 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160228 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160228' 00:20:36.701 killing process with pid 3160228 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3160228 00:20:36.701 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3160228 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.265 rmmod nvme_tcp 00:20:37.265 rmmod nvme_fabrics 00:20:37.265 rmmod nvme_keyring 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3182870 ']' 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3182870 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3182870 ']' 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3182870 
00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3182870 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3182870' 00:20:37.265 killing process with pid 3182870 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3182870 00:20:37.265 13:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3182870 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.524 13:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.524 13:19:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.427 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.427 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.PNz /tmp/spdk.key-sha256.R7p /tmp/spdk.key-sha384.PzI /tmp/spdk.key-sha512.DD4 /tmp/spdk.key-sha512.gbZ /tmp/spdk.key-sha384.jFj /tmp/spdk.key-sha256.C9t '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:39.427 00:20:39.427 real 3m29.955s 00:20:39.427 user 8m12.834s 00:20:39.427 sys 0m27.541s 00:20:39.427 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.427 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.427 ************************************ 00:20:39.427 END TEST nvmf_auth_target 00:20:39.427 ************************************ 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.686 ************************************ 00:20:39.686 START TEST nvmf_bdevio_no_huge 00:20:39.686 ************************************ 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:39.686 * Looking for test storage... 00:20:39.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.686 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.687 --rc genhtml_branch_coverage=1 00:20:39.687 --rc genhtml_function_coverage=1 00:20:39.687 --rc genhtml_legend=1 00:20:39.687 --rc geninfo_all_blocks=1 00:20:39.687 --rc geninfo_unexecuted_blocks=1 00:20:39.687 00:20:39.687 ' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.687 --rc genhtml_branch_coverage=1 00:20:39.687 --rc genhtml_function_coverage=1 00:20:39.687 --rc genhtml_legend=1 00:20:39.687 --rc geninfo_all_blocks=1 00:20:39.687 --rc geninfo_unexecuted_blocks=1 00:20:39.687 00:20:39.687 ' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.687 --rc genhtml_branch_coverage=1 00:20:39.687 --rc genhtml_function_coverage=1 00:20:39.687 --rc genhtml_legend=1 00:20:39.687 --rc geninfo_all_blocks=1 00:20:39.687 --rc geninfo_unexecuted_blocks=1 00:20:39.687 00:20:39.687 ' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.687 --rc genhtml_branch_coverage=1 
00:20:39.687 --rc genhtml_function_coverage=1 00:20:39.687 --rc genhtml_legend=1 00:20:39.687 --rc geninfo_all_blocks=1 00:20:39.687 --rc geninfo_unexecuted_blocks=1 00:20:39.687 00:20:39.687 ' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:39.687 13:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.687 13:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.225 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:20:42.226 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:42.226 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:42.226 Found net devices under 0000:09:00.0: cvl_0_0 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.226 
13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:42.226 Found net devices under 0000:09:00.1: cvl_0_1 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:42.226 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:42.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:20:42.227 00:20:42.227 --- 10.0.0.2 ping statistics --- 00:20:42.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.227 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:20:42.227 00:20:42.227 --- 10.0.0.1 ping statistics --- 00:20:42.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.227 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3188152 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3188152 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3188152 ']' 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.227 [2024-11-25 13:19:39.596045] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:20:42.227 [2024-11-25 13:19:39.596136] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:42.227 [2024-11-25 13:19:39.673913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.227 [2024-11-25 13:19:39.734986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.227 [2024-11-25 13:19:39.735043] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.227 [2024-11-25 13:19:39.735057] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.227 [2024-11-25 13:19:39.735072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.227 [2024-11-25 13:19:39.735081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.227 [2024-11-25 13:19:39.736246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:42.227 [2024-11-25 13:19:39.736319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:42.227 [2024-11-25 13:19:39.736443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:42.227 [2024-11-25 13:19:39.736447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.227 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.563 [2024-11-25 13:19:39.893269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:42.563 13:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.563 Malloc0 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.563 [2024-11-25 13:19:39.931648] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.563 13:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:42.563 { 00:20:42.563 "params": { 00:20:42.563 "name": "Nvme$subsystem", 00:20:42.563 "trtype": "$TEST_TRANSPORT", 00:20:42.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:42.563 "adrfam": "ipv4", 00:20:42.563 "trsvcid": "$NVMF_PORT", 00:20:42.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:42.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:42.563 "hdgst": ${hdgst:-false}, 00:20:42.563 "ddgst": ${ddgst:-false} 00:20:42.563 }, 00:20:42.563 "method": "bdev_nvme_attach_controller" 00:20:42.563 } 00:20:42.563 EOF 00:20:42.563 )") 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:42.563 13:19:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:42.563 "params": { 00:20:42.563 "name": "Nvme1", 00:20:42.563 "trtype": "tcp", 00:20:42.563 "traddr": "10.0.0.2", 00:20:42.563 "adrfam": "ipv4", 00:20:42.563 "trsvcid": "4420", 00:20:42.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.563 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:42.563 "hdgst": false, 00:20:42.563 "ddgst": false 00:20:42.563 }, 00:20:42.563 "method": "bdev_nvme_attach_controller" 00:20:42.563 }' 00:20:42.563 [2024-11-25 13:19:39.982929] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:20:42.564 [2024-11-25 13:19:39.983017] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3188292 ] 00:20:42.564 [2024-11-25 13:19:40.060280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:42.564 [2024-11-25 13:19:40.127065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.564 [2024-11-25 13:19:40.127120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.564 [2024-11-25 13:19:40.127124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.821 I/O targets: 00:20:42.821 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:42.821 00:20:42.821 00:20:42.821 CUnit - A unit testing framework for C - Version 2.1-3 00:20:42.821 http://cunit.sourceforge.net/ 00:20:42.821 00:20:42.821 00:20:42.821 Suite: bdevio tests on: Nvme1n1 00:20:43.079 Test: blockdev write read block ...passed 00:20:43.079 Test: blockdev write zeroes read block ...passed 00:20:43.079 Test: blockdev write zeroes read no split ...passed 00:20:43.079 Test: blockdev write zeroes 
read split ...passed 00:20:43.079 Test: blockdev write zeroes read split partial ...passed 00:20:43.079 Test: blockdev reset ...[2024-11-25 13:19:40.598718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:43.080 [2024-11-25 13:19:40.598830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e1720 (9): Bad file descriptor 00:20:43.080 [2024-11-25 13:19:40.615857] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:43.080 passed 00:20:43.080 Test: blockdev write read 8 blocks ...passed 00:20:43.080 Test: blockdev write read size > 128k ...passed 00:20:43.080 Test: blockdev write read invalid size ...passed 00:20:43.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:43.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:43.080 Test: blockdev write read max offset ...passed 00:20:43.337 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:43.337 Test: blockdev writev readv 8 blocks ...passed 00:20:43.337 Test: blockdev writev readv 30 x 1block ...passed 00:20:43.338 Test: blockdev writev readv block ...passed 00:20:43.338 Test: blockdev writev readv size > 128k ...passed 00:20:43.338 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:43.338 Test: blockdev comparev and writev ...[2024-11-25 13:19:40.831605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 13:19:40.831647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.831672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 
13:19:40.831690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.832024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 13:19:40.832047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.832069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 13:19:40.832085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.832393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 13:19:40.832417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.832437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 13:19:40.832454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.832756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 13:19:40.832779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.832800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:43.338 [2024-11-25 13:19:40.832816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:43.338 passed 00:20:43.338 Test: blockdev nvme passthru rw ...passed 00:20:43.338 Test: blockdev nvme passthru vendor specific ...[2024-11-25 13:19:40.915543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:43.338 [2024-11-25 13:19:40.915571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.915718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:43.338 [2024-11-25 13:19:40.915741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.915885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:43.338 [2024-11-25 13:19:40.915909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:43.338 [2024-11-25 13:19:40.916049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:43.338 [2024-11-25 13:19:40.916074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:43.338 passed 00:20:43.338 Test: blockdev nvme admin passthru ...passed 00:20:43.338 Test: blockdev copy ...passed 00:20:43.338 00:20:43.338 Run Summary: Type Total Ran Passed Failed Inactive 00:20:43.338 suites 1 1 n/a 0 0 00:20:43.338 tests 23 23 23 0 0 00:20:43.338 asserts 152 152 152 0 n/a 00:20:43.338 00:20:43.338 Elapsed time = 1.065 seconds 
00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.903 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.904 rmmod nvme_tcp 00:20:43.904 rmmod nvme_fabrics 00:20:43.904 rmmod nvme_keyring 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3188152 ']' 00:20:43.904 13:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3188152 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3188152 ']' 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3188152 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3188152 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3188152' 00:20:43.904 killing process with pid 3188152 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3188152 00:20:43.904 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3188152 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:44.470 13:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.470 13:19:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.377 00:20:46.377 real 0m6.763s 00:20:46.377 user 0m11.252s 00:20:46.377 sys 0m2.682s 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:46.377 ************************************ 00:20:46.377 END TEST nvmf_bdevio_no_huge 00:20:46.377 ************************************ 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.377 
************************************ 00:20:46.377 START TEST nvmf_tls 00:20:46.377 ************************************ 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:46.377 * Looking for test storage... 00:20:46.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:46.377 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.638 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:46.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.639 --rc genhtml_branch_coverage=1 00:20:46.639 --rc genhtml_function_coverage=1 00:20:46.639 --rc genhtml_legend=1 00:20:46.639 --rc geninfo_all_blocks=1 00:20:46.639 --rc geninfo_unexecuted_blocks=1 00:20:46.639 00:20:46.639 ' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.639 --rc genhtml_branch_coverage=1 00:20:46.639 --rc genhtml_function_coverage=1 00:20:46.639 --rc genhtml_legend=1 00:20:46.639 --rc geninfo_all_blocks=1 00:20:46.639 --rc geninfo_unexecuted_blocks=1 00:20:46.639 00:20:46.639 ' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.639 --rc genhtml_branch_coverage=1 00:20:46.639 --rc genhtml_function_coverage=1 00:20:46.639 --rc genhtml_legend=1 00:20:46.639 --rc geninfo_all_blocks=1 00:20:46.639 --rc geninfo_unexecuted_blocks=1 00:20:46.639 00:20:46.639 ' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.639 --rc genhtml_branch_coverage=1 00:20:46.639 --rc genhtml_function_coverage=1 00:20:46.639 --rc genhtml_legend=1 00:20:46.639 --rc geninfo_all_blocks=1 00:20:46.639 --rc geninfo_unexecuted_blocks=1 00:20:46.639 00:20:46.639 ' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.639 
13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:46.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:46.639 13:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.174 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.175 13:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:49.175 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:49.175 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.175 13:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:49.175 Found net devices under 0000:09:00.0: cvl_0_0 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:49.175 Found net devices under 0000:09:00.1: cvl_0_1 00:20:49.175 13:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:49.175 
13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:49.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:20:49.175 00:20:49.175 --- 10.0.0.2 ping statistics --- 00:20:49.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.175 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:20:49.175 00:20:49.175 --- 10.0.0.1 ping statistics --- 00:20:49.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.175 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3190386 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3190386 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3190386 ']' 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.175 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.175 [2024-11-25 13:19:46.487506] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:20:49.175 [2024-11-25 13:19:46.487585] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.175 [2024-11-25 13:19:46.562206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.176 [2024-11-25 13:19:46.621081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.176 [2024-11-25 13:19:46.621152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:49.176 [2024-11-25 13:19:46.621164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.176 [2024-11-25 13:19:46.621175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.176 [2024-11-25 13:19:46.621184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.176 [2024-11-25 13:19:46.621834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:49.176 13:19:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:49.434 true 00:20:49.434 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:49.434 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:49.691 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:49.691 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:49.691 
13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:49.949 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:49.949 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:50.207 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:50.207 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:50.207 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:50.773 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:50.773 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:50.773 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:50.773 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:50.773 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:50.773 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:51.031 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:51.031 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:51.031 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:51.597 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:51.597 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:51.597 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:51.597 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:51.597 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:52.162 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:52.163 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:52.163 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:52.421 13:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.VuE6m34ny2 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.23ydTxKtTj 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.VuE6m34ny2 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.23ydTxKtTj 00:20:52.421 13:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:52.679 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:53.245 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.VuE6m34ny2 00:20:53.245 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VuE6m34ny2 00:20:53.245 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:53.503 [2024-11-25 13:19:50.922868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.503 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.761 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:54.019 [2024-11-25 13:19:51.512496] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:54.019 [2024-11-25 13:19:51.512736] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.019 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:54.277 malloc0 00:20:54.277 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.535 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VuE6m34ny2 00:20:54.792 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.050 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.VuE6m34ny2 00:21:07.250 Initializing NVMe Controllers 00:21:07.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:07.250 Initialization complete. Launching workers. 
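The `format_interchange_psk` calls earlier in this log (nvmf/common.sh@743) turn the raw hex string `00112233445566778899aabbccddeeff` into the key `NVMeTLSkey-1:01:MDAx...:` that is then written to `/tmp/tmp.VuE6m34ny2` and registered via `keyring_file_add_key`. A minimal sketch of that transformation, assuming the standard NVMe/TCP TLS interchange layout (configured key bytes with a little-endian CRC-32 appended, base64-encoded, wrapped in the `NVMeTLSkey-1:<hash>:<b64>:` envelope); the function name and the CRC details here are inferred from the log output, not taken from SPDK source:

```python
# Hedged sketch of the PSK interchange format seen in the log above.
# Assumption: base64(key_bytes || CRC32-LE(key_bytes)) wrapped as
# NVMeTLSkey-1:<hash>:<base64>:  (hash 01 = SHA-256 in this test).
import base64
import struct
import zlib


def format_interchange_psk(key: str, digest: int) -> str:
    raw = key.encode("ascii")
    crc = zlib.crc32(raw) & 0xFFFFFFFF
    b64 = base64.b64encode(raw + struct.pack("<I", crc)).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)


print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
```

The base64 body of the result decodes back to the original 32 ASCII key characters plus the four CRC bytes, which is how a target can validate a pasted key before use.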
00:21:07.250 ======================================================== 00:21:07.250 Latency(us) 00:21:07.250 Device Information : IOPS MiB/s Average min max 00:21:07.251 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8730.06 34.10 7332.95 1128.57 8785.03 00:21:07.251 ======================================================== 00:21:07.251 Total : 8730.06 34.10 7332.95 1128.57 8785.03 00:21:07.251 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VuE6m34ny2 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VuE6m34ny2 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3192405 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3192405 /var/tmp/bdevperf.sock 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3192405 ']' 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.251 13:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.251 [2024-11-25 13:20:02.772002] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:07.251 [2024-11-25 13:20:02.772098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192405 ] 00:21:07.251 [2024-11-25 13:20:02.837362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.251 [2024-11-25 13:20:02.894777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.251 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.251 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:07.251 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VuE6m34ny2 00:21:07.251 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:07.251 [2024-11-25 13:20:03.521241] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.251 TLSTESTn1 00:21:07.251 13:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:07.251 Running I/O for 10 seconds... 00:21:08.187 3635.00 IOPS, 14.20 MiB/s [2024-11-25T12:20:06.782Z] 3606.50 IOPS, 14.09 MiB/s [2024-11-25T12:20:08.158Z] 3619.33 IOPS, 14.14 MiB/s [2024-11-25T12:20:09.093Z] 3622.25 IOPS, 14.15 MiB/s [2024-11-25T12:20:10.077Z] 3622.40 IOPS, 14.15 MiB/s [2024-11-25T12:20:11.033Z] 3626.83 IOPS, 14.17 MiB/s [2024-11-25T12:20:11.968Z] 3620.86 IOPS, 14.14 MiB/s [2024-11-25T12:20:12.902Z] 3617.12 IOPS, 14.13 MiB/s [2024-11-25T12:20:13.837Z] 3623.22 IOPS, 14.15 MiB/s [2024-11-25T12:20:13.837Z] 3631.60 IOPS, 14.19 MiB/s 00:21:16.178 Latency(us) 00:21:16.178 [2024-11-25T12:20:13.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.178 Verification LBA range: start 0x0 length 0x2000 00:21:16.178 TLSTESTn1 : 10.02 3636.64 14.21 0.00 0.00 35134.63 8446.86 31263.10 00:21:16.178 [2024-11-25T12:20:13.837Z] =================================================================================================================== 00:21:16.178 [2024-11-25T12:20:13.837Z] Total : 3636.64 14.21 0.00 0.00 35134.63 8446.86 31263.10 00:21:16.178 { 00:21:16.178 "results": [ 00:21:16.178 { 00:21:16.178 "job": "TLSTESTn1", 00:21:16.178 "core_mask": "0x4", 00:21:16.178 "workload": "verify", 00:21:16.178 "status": "finished", 00:21:16.178 "verify_range": { 00:21:16.178 "start": 0, 00:21:16.178 "length": 8192 00:21:16.178 }, 00:21:16.178 "queue_depth": 128, 00:21:16.178 "io_size": 4096, 00:21:16.178 "runtime": 10.020793, 00:21:16.178 "iops": 
3636.638337903996, 00:21:16.178 "mibps": 14.205618507437485, 00:21:16.178 "io_failed": 0, 00:21:16.178 "io_timeout": 0, 00:21:16.178 "avg_latency_us": 35134.62839840884, 00:21:16.178 "min_latency_us": 8446.862222222222, 00:21:16.178 "max_latency_us": 31263.09925925926 00:21:16.178 } 00:21:16.178 ], 00:21:16.178 "core_count": 1 00:21:16.178 } 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3192405 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3192405 ']' 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3192405 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3192405 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3192405' 00:21:16.178 killing process with pid 3192405 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3192405 00:21:16.178 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.178 00:21:16.178 Latency(us) 00:21:16.178 [2024-11-25T12:20:13.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.178 [2024-11-25T12:20:13.837Z] 
=================================================================================================================== 00:21:16.178 [2024-11-25T12:20:13.837Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.178 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3192405 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.23ydTxKtTj 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.23ydTxKtTj 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.23ydTxKtTj 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.23ydTxKtTj 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3193728 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3193728 /var/tmp/bdevperf.sock 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3193728 ']' 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.437 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.437 [2024-11-25 13:20:14.087348] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
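The bdevperf results JSON earlier in this log reports both `"iops": 3636.638337903996` and `"mibps": 14.205618507437485` for TLSTESTn1. The second figure is derived from the first: IOPS multiplied by the 4096-byte I/O size (`-o 4096` on the bdevperf command line), expressed in MiB/s. Reproducing that conversion with the values from the results object:

```python
# Derive the "mibps" field of the bdevperf summary from "iops".
IO_SIZE = 4096                    # bytes per I/O ("io_size" in the results)
iops = 3636.638337903996          # "iops" from the TLSTESTn1 results object
mibps = iops * IO_SIZE / (1024 * 1024)   # bytes/s -> MiB/s
print(mibps)
```

With a 4096-byte I/O size the conversion reduces to `iops / 256`, which is why the two fields always move in lockstep in these summaries.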
00:21:16.437 [2024-11-25 13:20:14.087449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193728 ] 00:21:16.696 [2024-11-25 13:20:14.155167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.696 [2024-11-25 13:20:14.211884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.696 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.696 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.696 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.23ydTxKtTj 00:21:16.954 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:17.212 [2024-11-25 13:20:14.858243] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.212 [2024-11-25 13:20:14.869979] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:17.212 [2024-11-25 13:20:14.870356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1420310 (107): Transport endpoint is not connected 00:21:17.471 [2024-11-25 13:20:14.871346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1420310 (9): Bad file descriptor 00:21:17.471 
[2024-11-25 13:20:14.872345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:17.471 [2024-11-25 13:20:14.872368] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:17.471 [2024-11-25 13:20:14.872397] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:17.471 [2024-11-25 13:20:14.872416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:17.471 request: 00:21:17.471 { 00:21:17.471 "name": "TLSTEST", 00:21:17.471 "trtype": "tcp", 00:21:17.471 "traddr": "10.0.0.2", 00:21:17.471 "adrfam": "ipv4", 00:21:17.471 "trsvcid": "4420", 00:21:17.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.471 "prchk_reftag": false, 00:21:17.471 "prchk_guard": false, 00:21:17.471 "hdgst": false, 00:21:17.471 "ddgst": false, 00:21:17.471 "psk": "key0", 00:21:17.471 "allow_unrecognized_csi": false, 00:21:17.471 "method": "bdev_nvme_attach_controller", 00:21:17.471 "req_id": 1 00:21:17.471 } 00:21:17.471 Got JSON-RPC error response 00:21:17.471 response: 00:21:17.471 { 00:21:17.471 "code": -5, 00:21:17.471 "message": "Input/output error" 00:21:17.471 } 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3193728 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3193728 ']' 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3193728 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193728 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193728' 00:21:17.471 killing process with pid 3193728 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3193728 00:21:17.471 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.471 00:21:17.471 Latency(us) 00:21:17.471 [2024-11-25T12:20:15.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.471 [2024-11-25T12:20:15.130Z] =================================================================================================================== 00:21:17.471 [2024-11-25T12:20:15.130Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.471 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3193728 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VuE6m34ny2 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VuE6m34ny2 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.VuE6m34ny2 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VuE6m34ny2 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3193869 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.471 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.472 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3193869 
/var/tmp/bdevperf.sock 00:21:17.472 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3193869 ']' 00:21:17.472 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.472 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.472 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.472 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.472 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.730 [2024-11-25 13:20:15.169700] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:21:17.730 [2024-11-25 13:20:15.169804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193869 ] 00:21:17.730 [2024-11-25 13:20:15.239511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.730 [2024-11-25 13:20:15.298563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.988 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.988 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:17.988 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VuE6m34ny2 00:21:18.246 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:18.504 [2024-11-25 13:20:15.923697] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.504 [2024-11-25 13:20:15.932447] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:18.504 [2024-11-25 13:20:15.932479] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:18.504 [2024-11-25 13:20:15.932531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:18.504 [2024-11-25 13:20:15.932801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884310 (107): Transport endpoint is not connected 00:21:18.504 [2024-11-25 13:20:15.933791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x884310 (9): Bad file descriptor 00:21:18.504 [2024-11-25 13:20:15.934790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:18.504 [2024-11-25 13:20:15.934810] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:18.504 [2024-11-25 13:20:15.934822] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:18.504 [2024-11-25 13:20:15.934839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:18.504 request: 00:21:18.504 { 00:21:18.504 "name": "TLSTEST", 00:21:18.504 "trtype": "tcp", 00:21:18.504 "traddr": "10.0.0.2", 00:21:18.504 "adrfam": "ipv4", 00:21:18.504 "trsvcid": "4420", 00:21:18.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.504 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:18.504 "prchk_reftag": false, 00:21:18.504 "prchk_guard": false, 00:21:18.504 "hdgst": false, 00:21:18.504 "ddgst": false, 00:21:18.504 "psk": "key0", 00:21:18.504 "allow_unrecognized_csi": false, 00:21:18.504 "method": "bdev_nvme_attach_controller", 00:21:18.504 "req_id": 1 00:21:18.504 } 00:21:18.504 Got JSON-RPC error response 00:21:18.504 response: 00:21:18.504 { 00:21:18.504 "code": -5, 00:21:18.504 "message": "Input/output error" 00:21:18.504 } 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3193869 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3193869 ']' 00:21:18.504 13:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3193869 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193869 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193869' 00:21:18.504 killing process with pid 3193869 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3193869 00:21:18.504 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.504 00:21:18.504 Latency(us) 00:21:18.504 [2024-11-25T12:20:16.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.504 [2024-11-25T12:20:16.163Z] =================================================================================================================== 00:21:18.504 [2024-11-25T12:20:16.163Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.504 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3193869 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:18.763 13:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VuE6m34ny2 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VuE6m34ny2 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.VuE6m34ny2 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VuE6m34ny2 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3193978 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3193978 /var/tmp/bdevperf.sock 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3193978 ']' 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.763 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.763 [2024-11-25 13:20:16.260598] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:21:18.763 [2024-11-25 13:20:16.260697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193978 ] 00:21:18.763 [2024-11-25 13:20:16.333855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.763 [2024-11-25 13:20:16.393340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.021 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.021 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.021 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VuE6m34ny2 00:21:19.279 13:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:19.537 [2024-11-25 13:20:17.002732] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.537 [2024-11-25 13:20:17.008197] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:19.537 [2024-11-25 13:20:17.008228] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:19.537 [2024-11-25 13:20:17.008275] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:19.537 [2024-11-25 13:20:17.008811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226e310 (107): Transport endpoint is not connected 00:21:19.537 [2024-11-25 13:20:17.009802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226e310 (9): Bad file descriptor 00:21:19.537 [2024-11-25 13:20:17.010801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:19.537 [2024-11-25 13:20:17.010822] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:19.537 [2024-11-25 13:20:17.010835] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:19.537 [2024-11-25 13:20:17.010853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:19.537 request: 00:21:19.537 { 00:21:19.537 "name": "TLSTEST", 00:21:19.537 "trtype": "tcp", 00:21:19.537 "traddr": "10.0.0.2", 00:21:19.537 "adrfam": "ipv4", 00:21:19.537 "trsvcid": "4420", 00:21:19.537 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:19.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:19.537 "prchk_reftag": false, 00:21:19.537 "prchk_guard": false, 00:21:19.537 "hdgst": false, 00:21:19.537 "ddgst": false, 00:21:19.537 "psk": "key0", 00:21:19.537 "allow_unrecognized_csi": false, 00:21:19.537 "method": "bdev_nvme_attach_controller", 00:21:19.537 "req_id": 1 00:21:19.537 } 00:21:19.537 Got JSON-RPC error response 00:21:19.537 response: 00:21:19.537 { 00:21:19.537 "code": -5, 00:21:19.537 "message": "Input/output error" 00:21:19.537 } 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3193978 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3193978 ']' 00:21:19.537 13:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3193978 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193978 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193978' 00:21:19.537 killing process with pid 3193978 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3193978 00:21:19.537 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.537 00:21:19.537 Latency(us) 00:21:19.537 [2024-11-25T12:20:17.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.537 [2024-11-25T12:20:17.196Z] =================================================================================================================== 00:21:19.537 [2024-11-25T12:20:17.196Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.537 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3193978 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:19.796 13:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3194063 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3194063 /var/tmp/bdevperf.sock 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3194063 ']' 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.796 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.796 [2024-11-25 13:20:17.335330] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:21:19.796 [2024-11-25 13:20:17.335416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194063 ] 00:21:19.796 [2024-11-25 13:20:17.402397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.055 [2024-11-25 13:20:17.461497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.055 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:20.055 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:20.055 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:20.313 [2024-11-25 13:20:17.817741] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:20.313 [2024-11-25 13:20:17.817774] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:20.313 request: 00:21:20.313 { 00:21:20.313 "name": "key0", 00:21:20.313 "path": "", 00:21:20.313 "method": "keyring_file_add_key", 00:21:20.313 "req_id": 1 00:21:20.313 } 00:21:20.313 Got JSON-RPC error response 00:21:20.313 response: 00:21:20.313 { 00:21:20.313 "code": -1, 00:21:20.313 "message": "Operation not permitted" 00:21:20.313 } 00:21:20.313 13:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:20.573 [2024-11-25 13:20:18.086559] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:20.573 [2024-11-25 13:20:18.086622] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:20.573 request: 00:21:20.573 { 00:21:20.573 "name": "TLSTEST", 00:21:20.573 "trtype": "tcp", 00:21:20.573 "traddr": "10.0.0.2", 00:21:20.573 "adrfam": "ipv4", 00:21:20.573 "trsvcid": "4420", 00:21:20.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.573 "prchk_reftag": false, 00:21:20.573 "prchk_guard": false, 00:21:20.573 "hdgst": false, 00:21:20.573 "ddgst": false, 00:21:20.573 "psk": "key0", 00:21:20.573 "allow_unrecognized_csi": false, 00:21:20.573 "method": "bdev_nvme_attach_controller", 00:21:20.573 "req_id": 1 00:21:20.573 } 00:21:20.573 Got JSON-RPC error response 00:21:20.573 response: 00:21:20.573 { 00:21:20.573 "code": -126, 00:21:20.573 "message": "Required key not available" 00:21:20.573 } 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3194063 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3194063 ']' 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3194063 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194063 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194063' 00:21:20.573 killing process with pid 3194063 
00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3194063 00:21:20.573 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.573 00:21:20.573 Latency(us) 00:21:20.573 [2024-11-25T12:20:18.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.573 [2024-11-25T12:20:18.232Z] =================================================================================================================== 00:21:20.573 [2024-11-25T12:20:18.232Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.573 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3194063 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3190386 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3190386 ']' 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3190386 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190386 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190386' 00:21:20.830 killing process with pid 3190386 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3190386 00:21:20.830 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3190386 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.qFRJT7xJf9 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:21.089 13:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.qFRJT7xJf9 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3194306 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3194306 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3194306 ']' 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.089 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.089 [2024-11-25 13:20:18.695744] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:21:21.089 [2024-11-25 13:20:18.695840] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.347 [2024-11-25 13:20:18.767759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.347 [2024-11-25 13:20:18.826415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.348 [2024-11-25 13:20:18.826473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.348 [2024-11-25 13:20:18.826487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.348 [2024-11-25 13:20:18.826498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.348 [2024-11-25 13:20:18.826508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.348 [2024-11-25 13:20:18.827089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.qFRJT7xJf9 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qFRJT7xJf9 00:21:21.348 13:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.606 [2024-11-25 13:20:19.216387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.606 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:21.864 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:22.122 [2024-11-25 13:20:19.753890] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.122 [2024-11-25 13:20:19.754129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:22.122 13:20:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:22.689 malloc0 00:21:22.689 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:22.689 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFRJT7xJf9 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qFRJT7xJf9 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3194593 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.255 13:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3194593 /var/tmp/bdevperf.sock 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3194593 ']' 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.255 13:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.514 [2024-11-25 13:20:20.920536] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:21:23.514 [2024-11-25 13:20:20.920624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194593 ] 00:21:23.514 [2024-11-25 13:20:20.986185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.514 [2024-11-25 13:20:21.045506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.514 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.514 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:23.514 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:24.081 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:24.339 [2024-11-25 13:20:21.782826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.339 TLSTESTn1 00:21:24.339 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.339 Running I/O for 10 seconds... 
00:21:26.649 3191.00 IOPS, 12.46 MiB/s [2024-11-25T12:20:25.243Z] 3200.00 IOPS, 12.50 MiB/s [2024-11-25T12:20:26.177Z] 3226.33 IOPS, 12.60 MiB/s [2024-11-25T12:20:27.112Z] 3240.50 IOPS, 12.66 MiB/s [2024-11-25T12:20:28.047Z] 3242.40 IOPS, 12.67 MiB/s [2024-11-25T12:20:29.422Z] 3250.17 IOPS, 12.70 MiB/s [2024-11-25T12:20:30.356Z] 3251.71 IOPS, 12.70 MiB/s [2024-11-25T12:20:31.321Z] 3255.50 IOPS, 12.72 MiB/s [2024-11-25T12:20:32.254Z] 3241.33 IOPS, 12.66 MiB/s [2024-11-25T12:20:32.254Z] 3241.60 IOPS, 12.66 MiB/s 00:21:34.595 Latency(us) 00:21:34.595 [2024-11-25T12:20:32.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.595 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.595 Verification LBA range: start 0x0 length 0x2000 00:21:34.595 TLSTESTn1 : 10.02 3247.27 12.68 0.00 0.00 39355.27 6990.51 34758.35 00:21:34.595 [2024-11-25T12:20:32.254Z] =================================================================================================================== 00:21:34.595 [2024-11-25T12:20:32.254Z] Total : 3247.27 12.68 0.00 0.00 39355.27 6990.51 34758.35 00:21:34.595 { 00:21:34.595 "results": [ 00:21:34.595 { 00:21:34.595 "job": "TLSTESTn1", 00:21:34.595 "core_mask": "0x4", 00:21:34.595 "workload": "verify", 00:21:34.595 "status": "finished", 00:21:34.595 "verify_range": { 00:21:34.595 "start": 0, 00:21:34.595 "length": 8192 00:21:34.595 }, 00:21:34.595 "queue_depth": 128, 00:21:34.595 "io_size": 4096, 00:21:34.595 "runtime": 10.021644, 00:21:34.595 "iops": 3247.271605337408, 00:21:34.595 "mibps": 12.68465470834925, 00:21:34.595 "io_failed": 0, 00:21:34.595 "io_timeout": 0, 00:21:34.595 "avg_latency_us": 39355.27418524323, 00:21:34.595 "min_latency_us": 6990.506666666667, 00:21:34.595 "max_latency_us": 34758.35259259259 00:21:34.595 } 00:21:34.595 ], 00:21:34.595 "core_count": 1 00:21:34.595 } 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
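The bdevperf summary above reports both `iops` and `mibps` for TLSTESTn1; the two are related by the test's 4096-byte I/O size (`-o 4096`), since MiB/s = IOPS × io_size / 2^20. A quick check against the numbers printed in the JSON results block:

```python
# Verify the relationship between the "iops" and "mibps" fields reported
# by bdevperf for the TLSTESTn1 job (io_size comes from the -o 4096 flag).
io_size = 4096                       # bytes per I/O
iops = 3247.271605337408             # "iops" from the results JSON above
mibps = iops * io_size / (1024 ** 2) # convert to mebibytes per second
print(round(mibps, 11))              # → 12.68465470835, matching "mibps"
```

This also explains the rounded `3247.27 12.68` pair in the human-readable latency table.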
1' SIGINT SIGTERM EXIT 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3194593 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3194593 ']' 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3194593 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194593 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194593' 00:21:34.595 killing process with pid 3194593 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3194593 00:21:34.595 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.595 00:21:34.595 Latency(us) 00:21:34.595 [2024-11-25T12:20:32.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.595 [2024-11-25T12:20:32.254Z] =================================================================================================================== 00:21:34.595 [2024-11-25T12:20:32.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.595 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3194593 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.qFRJT7xJf9 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFRJT7xJf9 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFRJT7xJf9 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qFRJT7xJf9 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qFRJT7xJf9 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3195917 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.854 
13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3195917 /var/tmp/bdevperf.sock 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3195917 ']' 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.854 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.854 [2024-11-25 13:20:32.353058] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:21:34.854 [2024-11-25 13:20:32.353162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195917 ] 00:21:34.854 [2024-11-25 13:20:32.423449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.854 [2024-11-25 13:20:32.480946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.112 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.112 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.112 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:35.370 [2024-11-25 13:20:32.827906] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qFRJT7xJf9': 0100666 00:21:35.370 [2024-11-25 13:20:32.827950] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:35.370 request: 00:21:35.370 { 00:21:35.370 "name": "key0", 00:21:35.370 "path": "/tmp/tmp.qFRJT7xJf9", 00:21:35.370 "method": "keyring_file_add_key", 00:21:35.370 "req_id": 1 00:21:35.370 } 00:21:35.370 Got JSON-RPC error response 00:21:35.370 response: 00:21:35.370 { 00:21:35.370 "code": -1, 00:21:35.370 "message": "Operation not permitted" 00:21:35.370 } 00:21:35.370 13:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:35.628 [2024-11-25 13:20:33.092739] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.628 [2024-11-25 13:20:33.092797] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:35.628 request: 00:21:35.628 { 00:21:35.628 "name": "TLSTEST", 00:21:35.628 "trtype": "tcp", 00:21:35.628 "traddr": "10.0.0.2", 00:21:35.628 "adrfam": "ipv4", 00:21:35.628 "trsvcid": "4420", 00:21:35.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.628 "prchk_reftag": false, 00:21:35.628 "prchk_guard": false, 00:21:35.628 "hdgst": false, 00:21:35.628 "ddgst": false, 00:21:35.628 "psk": "key0", 00:21:35.628 "allow_unrecognized_csi": false, 00:21:35.628 "method": "bdev_nvme_attach_controller", 00:21:35.628 "req_id": 1 00:21:35.628 } 00:21:35.628 Got JSON-RPC error response 00:21:35.628 response: 00:21:35.629 { 00:21:35.629 "code": -126, 00:21:35.629 "message": "Required key not available" 00:21:35.629 } 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3195917 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3195917 ']' 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3195917 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195917 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
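The `keyring_file_check_path` error above shows that after `chmod 0666`, `keyring_file_add_key` rejects the PSK file with mode `0100666`. The log suggests SPDK refuses key files that group or others can access, much like OpenSSH treats private keys; the sketch below models that check in Python (the exact SPDK logic may differ — this is an illustration of the observed behavior, not SPDK's implementation):

```python
import os
import tempfile

def check_key_file(path: str) -> None:
    # Reject key files accessible by group or others (any of the 0o077 bits),
    # mirroring the "Invalid permissions for key file ... 0100666" error above.
    mode = os.stat(path).st_mode
    if mode & 0o077:
        raise PermissionError(
            f"Invalid permissions for key file {path!r}: {oct(mode)}")

# Demonstration with a throwaway file standing in for /tmp/tmp.qFRJT7xJf9:
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)
try:
    check_key_file(path)       # raises: group/other read-write bits are set
except PermissionError as e:
    print("rejected:", e)
os.chmod(path, 0o600)
check_key_file(path)           # passes: only the owner can read/write
os.remove(path)
```

This matches the later `chmod 0600` step in the log, after which the same `keyring_file_add_key` call succeeds.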
'killing process with pid 3195917' 00:21:35.629 killing process with pid 3195917 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3195917 00:21:35.629 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.629 00:21:35.629 Latency(us) 00:21:35.629 [2024-11-25T12:20:33.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.629 [2024-11-25T12:20:33.288Z] =================================================================================================================== 00:21:35.629 [2024-11-25T12:20:33.288Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.629 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3195917 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3194306 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3194306 ']' 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3194306 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194306 00:21:35.887 
13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194306' 00:21:35.887 killing process with pid 3194306 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3194306 00:21:35.887 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3194306 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3196070 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3196070 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3196070 ']' 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:36.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.145 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.145 [2024-11-25 13:20:33.662955] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:36.145 [2024-11-25 13:20:33.663048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.145 [2024-11-25 13:20:33.734773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.146 [2024-11-25 13:20:33.792749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.146 [2024-11-25 13:20:33.792807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.146 [2024-11-25 13:20:33.792821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.146 [2024-11-25 13:20:33.792831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.146 [2024-11-25 13:20:33.792840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.146 [2024-11-25 13:20:33.793437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.qFRJT7xJf9 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qFRJT7xJf9 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.qFRJT7xJf9 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qFRJT7xJf9 00:21:36.404 13:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.661 [2024-11-25 13:20:34.245169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.661 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.919 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:37.178 [2024-11-25 13:20:34.830792] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:37.178 [2024-11-25 13:20:34.831041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.435 13:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:37.693 malloc0 00:21:37.694 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:37.951 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:38.209 [2024-11-25 13:20:35.748509] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qFRJT7xJf9': 0100666 00:21:38.209 [2024-11-25 13:20:35.748546] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:38.209 request: 00:21:38.209 { 00:21:38.209 "name": "key0", 00:21:38.209 "path": "/tmp/tmp.qFRJT7xJf9", 00:21:38.209 "method": "keyring_file_add_key", 00:21:38.209 "req_id": 1 
00:21:38.209 } 00:21:38.209 Got JSON-RPC error response 00:21:38.209 response: 00:21:38.209 { 00:21:38.209 "code": -1, 00:21:38.209 "message": "Operation not permitted" 00:21:38.209 } 00:21:38.209 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:38.468 [2024-11-25 13:20:36.025255] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:38.468 [2024-11-25 13:20:36.025332] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:38.468 request: 00:21:38.468 { 00:21:38.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.468 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.468 "psk": "key0", 00:21:38.468 "method": "nvmf_subsystem_add_host", 00:21:38.468 "req_id": 1 00:21:38.468 } 00:21:38.468 Got JSON-RPC error response 00:21:38.468 response: 00:21:38.468 { 00:21:38.468 "code": -32603, 00:21:38.468 "message": "Internal error" 00:21:38.468 } 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3196070 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3196070 ']' 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3196070 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:38.468 13:20:36 
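The error codes in the JSON-RPC responses above appear to follow two conventions: the negative codes `-1` ("Operation not permitted") and `-126` ("Required key not available") look like negated Linux errno values (`EPERM`, `ENOKEY`), while `-32603` ("Internal error") is the standard JSON-RPC 2.0 internal-error code. A small check of the errno mapping on Linux (an assumption about SPDK's convention, inferred from the log, not taken from SPDK documentation):

```python
import errno
import os

# Negated errno values match the JSON-RPC error codes seen in this log:
#   code -1   -> EPERM  ("Operation not permitted", keyring_file_add_key)
#   code -126 -> ENOKEY ("Required key not available", bdev_nvme_attach_controller)
# code -32603 is not an errno; it is JSON-RPC 2.0's reserved "Internal error".
print(errno.EPERM, os.strerror(errno.EPERM))
print(errno.ENOKEY, os.strerror(errno.ENOKEY))
```

Note that `ENOKEY` is Linux-specific; the strings printed by `os.strerror` come from the C library and may vary by platform.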
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196070 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196070' 00:21:38.468 killing process with pid 3196070 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3196070 00:21:38.468 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3196070 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.qFRJT7xJf9 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3196372 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3196372 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3196372 ']' 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.726 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.726 [2024-11-25 13:20:36.353070] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:38.726 [2024-11-25 13:20:36.353166] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.984 [2024-11-25 13:20:36.423940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.984 [2024-11-25 13:20:36.479660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.984 [2024-11-25 13:20:36.479713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.984 [2024-11-25 13:20:36.479734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.984 [2024-11-25 13:20:36.479745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.984 [2024-11-25 13:20:36.479755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:38.984 [2024-11-25 13:20:36.480326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.qFRJT7xJf9 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qFRJT7xJf9 00:21:38.984 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:39.242 [2024-11-25 13:20:36.852157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.242 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:39.500 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:40.066 [2024-11-25 13:20:37.445783] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.066 [2024-11-25 13:20:37.446019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:40.066 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:40.324 malloc0 00:21:40.324 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:40.583 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:40.842 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3196658 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3196658 /var/tmp/bdevperf.sock 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3196658 ']' 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:21:41.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.100 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.100 [2024-11-25 13:20:38.602662] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:41.100 [2024-11-25 13:20:38.602733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196658 ] 00:21:41.100 [2024-11-25 13:20:38.666870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.100 [2024-11-25 13:20:38.724203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.359 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.359 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:41.359 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:41.617 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:41.885 [2024-11-25 13:20:39.465697] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.885 TLSTESTn1 00:21:42.145 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:42.404 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:42.404 "subsystems": [ 00:21:42.404 { 00:21:42.404 "subsystem": "keyring", 00:21:42.404 "config": [ 00:21:42.404 { 00:21:42.404 "method": "keyring_file_add_key", 00:21:42.404 "params": { 00:21:42.404 "name": "key0", 00:21:42.404 "path": "/tmp/tmp.qFRJT7xJf9" 00:21:42.404 } 00:21:42.404 } 00:21:42.404 ] 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "subsystem": "iobuf", 00:21:42.404 "config": [ 00:21:42.404 { 00:21:42.404 "method": "iobuf_set_options", 00:21:42.404 "params": { 00:21:42.404 "small_pool_count": 8192, 00:21:42.404 "large_pool_count": 1024, 00:21:42.404 "small_bufsize": 8192, 00:21:42.404 "large_bufsize": 135168, 00:21:42.404 "enable_numa": false 00:21:42.404 } 00:21:42.404 } 00:21:42.404 ] 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "subsystem": "sock", 00:21:42.404 "config": [ 00:21:42.404 { 00:21:42.404 "method": "sock_set_default_impl", 00:21:42.404 "params": { 00:21:42.404 "impl_name": "posix" 00:21:42.404 } 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "method": "sock_impl_set_options", 00:21:42.404 "params": { 00:21:42.404 "impl_name": "ssl", 00:21:42.404 "recv_buf_size": 4096, 00:21:42.404 "send_buf_size": 4096, 00:21:42.404 "enable_recv_pipe": true, 00:21:42.404 "enable_quickack": false, 00:21:42.404 "enable_placement_id": 0, 00:21:42.404 "enable_zerocopy_send_server": true, 00:21:42.404 "enable_zerocopy_send_client": false, 00:21:42.404 "zerocopy_threshold": 0, 00:21:42.404 "tls_version": 0, 00:21:42.404 "enable_ktls": false 00:21:42.404 } 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "method": "sock_impl_set_options", 00:21:42.404 "params": { 00:21:42.404 "impl_name": "posix", 00:21:42.404 "recv_buf_size": 2097152, 00:21:42.404 "send_buf_size": 2097152, 00:21:42.404 "enable_recv_pipe": true, 00:21:42.404 "enable_quickack": false, 00:21:42.404 "enable_placement_id": 0, 
00:21:42.404 "enable_zerocopy_send_server": true, 00:21:42.404 "enable_zerocopy_send_client": false, 00:21:42.404 "zerocopy_threshold": 0, 00:21:42.404 "tls_version": 0, 00:21:42.404 "enable_ktls": false 00:21:42.404 } 00:21:42.404 } 00:21:42.404 ] 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "subsystem": "vmd", 00:21:42.404 "config": [] 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "subsystem": "accel", 00:21:42.404 "config": [ 00:21:42.404 { 00:21:42.404 "method": "accel_set_options", 00:21:42.404 "params": { 00:21:42.404 "small_cache_size": 128, 00:21:42.404 "large_cache_size": 16, 00:21:42.404 "task_count": 2048, 00:21:42.404 "sequence_count": 2048, 00:21:42.404 "buf_count": 2048 00:21:42.404 } 00:21:42.404 } 00:21:42.404 ] 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "subsystem": "bdev", 00:21:42.404 "config": [ 00:21:42.404 { 00:21:42.404 "method": "bdev_set_options", 00:21:42.404 "params": { 00:21:42.404 "bdev_io_pool_size": 65535, 00:21:42.404 "bdev_io_cache_size": 256, 00:21:42.404 "bdev_auto_examine": true, 00:21:42.404 "iobuf_small_cache_size": 128, 00:21:42.404 "iobuf_large_cache_size": 16 00:21:42.404 } 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "method": "bdev_raid_set_options", 00:21:42.404 "params": { 00:21:42.404 "process_window_size_kb": 1024, 00:21:42.404 "process_max_bandwidth_mb_sec": 0 00:21:42.404 } 00:21:42.404 }, 00:21:42.404 { 00:21:42.404 "method": "bdev_iscsi_set_options", 00:21:42.404 "params": { 00:21:42.405 "timeout_sec": 30 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "bdev_nvme_set_options", 00:21:42.405 "params": { 00:21:42.405 "action_on_timeout": "none", 00:21:42.405 "timeout_us": 0, 00:21:42.405 "timeout_admin_us": 0, 00:21:42.405 "keep_alive_timeout_ms": 10000, 00:21:42.405 "arbitration_burst": 0, 00:21:42.405 "low_priority_weight": 0, 00:21:42.405 "medium_priority_weight": 0, 00:21:42.405 "high_priority_weight": 0, 00:21:42.405 "nvme_adminq_poll_period_us": 10000, 00:21:42.405 "nvme_ioq_poll_period_us": 0, 
00:21:42.405 "io_queue_requests": 0, 00:21:42.405 "delay_cmd_submit": true, 00:21:42.405 "transport_retry_count": 4, 00:21:42.405 "bdev_retry_count": 3, 00:21:42.405 "transport_ack_timeout": 0, 00:21:42.405 "ctrlr_loss_timeout_sec": 0, 00:21:42.405 "reconnect_delay_sec": 0, 00:21:42.405 "fast_io_fail_timeout_sec": 0, 00:21:42.405 "disable_auto_failback": false, 00:21:42.405 "generate_uuids": false, 00:21:42.405 "transport_tos": 0, 00:21:42.405 "nvme_error_stat": false, 00:21:42.405 "rdma_srq_size": 0, 00:21:42.405 "io_path_stat": false, 00:21:42.405 "allow_accel_sequence": false, 00:21:42.405 "rdma_max_cq_size": 0, 00:21:42.405 "rdma_cm_event_timeout_ms": 0, 00:21:42.405 "dhchap_digests": [ 00:21:42.405 "sha256", 00:21:42.405 "sha384", 00:21:42.405 "sha512" 00:21:42.405 ], 00:21:42.405 "dhchap_dhgroups": [ 00:21:42.405 "null", 00:21:42.405 "ffdhe2048", 00:21:42.405 "ffdhe3072", 00:21:42.405 "ffdhe4096", 00:21:42.405 "ffdhe6144", 00:21:42.405 "ffdhe8192" 00:21:42.405 ] 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "bdev_nvme_set_hotplug", 00:21:42.405 "params": { 00:21:42.405 "period_us": 100000, 00:21:42.405 "enable": false 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "bdev_malloc_create", 00:21:42.405 "params": { 00:21:42.405 "name": "malloc0", 00:21:42.405 "num_blocks": 8192, 00:21:42.405 "block_size": 4096, 00:21:42.405 "physical_block_size": 4096, 00:21:42.405 "uuid": "b16705e3-10d4-4b96-8fdb-18061691c80a", 00:21:42.405 "optimal_io_boundary": 0, 00:21:42.405 "md_size": 0, 00:21:42.405 "dif_type": 0, 00:21:42.405 "dif_is_head_of_md": false, 00:21:42.405 "dif_pi_format": 0 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "bdev_wait_for_examine" 00:21:42.405 } 00:21:42.405 ] 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "subsystem": "nbd", 00:21:42.405 "config": [] 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "subsystem": "scheduler", 00:21:42.405 "config": [ 00:21:42.405 { 00:21:42.405 "method": 
"framework_set_scheduler", 00:21:42.405 "params": { 00:21:42.405 "name": "static" 00:21:42.405 } 00:21:42.405 } 00:21:42.405 ] 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "subsystem": "nvmf", 00:21:42.405 "config": [ 00:21:42.405 { 00:21:42.405 "method": "nvmf_set_config", 00:21:42.405 "params": { 00:21:42.405 "discovery_filter": "match_any", 00:21:42.405 "admin_cmd_passthru": { 00:21:42.405 "identify_ctrlr": false 00:21:42.405 }, 00:21:42.405 "dhchap_digests": [ 00:21:42.405 "sha256", 00:21:42.405 "sha384", 00:21:42.405 "sha512" 00:21:42.405 ], 00:21:42.405 "dhchap_dhgroups": [ 00:21:42.405 "null", 00:21:42.405 "ffdhe2048", 00:21:42.405 "ffdhe3072", 00:21:42.405 "ffdhe4096", 00:21:42.405 "ffdhe6144", 00:21:42.405 "ffdhe8192" 00:21:42.405 ] 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "nvmf_set_max_subsystems", 00:21:42.405 "params": { 00:21:42.405 "max_subsystems": 1024 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "nvmf_set_crdt", 00:21:42.405 "params": { 00:21:42.405 "crdt1": 0, 00:21:42.405 "crdt2": 0, 00:21:42.405 "crdt3": 0 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "nvmf_create_transport", 00:21:42.405 "params": { 00:21:42.405 "trtype": "TCP", 00:21:42.405 "max_queue_depth": 128, 00:21:42.405 "max_io_qpairs_per_ctrlr": 127, 00:21:42.405 "in_capsule_data_size": 4096, 00:21:42.405 "max_io_size": 131072, 00:21:42.405 "io_unit_size": 131072, 00:21:42.405 "max_aq_depth": 128, 00:21:42.405 "num_shared_buffers": 511, 00:21:42.405 "buf_cache_size": 4294967295, 00:21:42.405 "dif_insert_or_strip": false, 00:21:42.405 "zcopy": false, 00:21:42.405 "c2h_success": false, 00:21:42.405 "sock_priority": 0, 00:21:42.405 "abort_timeout_sec": 1, 00:21:42.405 "ack_timeout": 0, 00:21:42.405 "data_wr_pool_size": 0 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "nvmf_create_subsystem", 00:21:42.405 "params": { 00:21:42.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.405 
"allow_any_host": false, 00:21:42.405 "serial_number": "SPDK00000000000001", 00:21:42.405 "model_number": "SPDK bdev Controller", 00:21:42.405 "max_namespaces": 10, 00:21:42.405 "min_cntlid": 1, 00:21:42.405 "max_cntlid": 65519, 00:21:42.405 "ana_reporting": false 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "nvmf_subsystem_add_host", 00:21:42.405 "params": { 00:21:42.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.405 "host": "nqn.2016-06.io.spdk:host1", 00:21:42.405 "psk": "key0" 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "nvmf_subsystem_add_ns", 00:21:42.405 "params": { 00:21:42.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.405 "namespace": { 00:21:42.405 "nsid": 1, 00:21:42.405 "bdev_name": "malloc0", 00:21:42.405 "nguid": "B16705E310D44B968FDB18061691C80A", 00:21:42.405 "uuid": "b16705e3-10d4-4b96-8fdb-18061691c80a", 00:21:42.405 "no_auto_visible": false 00:21:42.405 } 00:21:42.405 } 00:21:42.405 }, 00:21:42.405 { 00:21:42.405 "method": "nvmf_subsystem_add_listener", 00:21:42.405 "params": { 00:21:42.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.405 "listen_address": { 00:21:42.405 "trtype": "TCP", 00:21:42.405 "adrfam": "IPv4", 00:21:42.405 "traddr": "10.0.0.2", 00:21:42.405 "trsvcid": "4420" 00:21:42.405 }, 00:21:42.405 "secure_channel": true 00:21:42.405 } 00:21:42.405 } 00:21:42.406 ] 00:21:42.406 } 00:21:42.406 ] 00:21:42.406 }' 00:21:42.406 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:42.972 "subsystems": [ 00:21:42.972 { 00:21:42.972 "subsystem": "keyring", 00:21:42.972 "config": [ 00:21:42.972 { 00:21:42.972 "method": "keyring_file_add_key", 00:21:42.972 "params": { 00:21:42.972 "name": "key0", 00:21:42.972 "path": "/tmp/tmp.qFRJT7xJf9" 00:21:42.972 } 
00:21:42.972 } 00:21:42.972 ] 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "subsystem": "iobuf", 00:21:42.972 "config": [ 00:21:42.972 { 00:21:42.972 "method": "iobuf_set_options", 00:21:42.972 "params": { 00:21:42.972 "small_pool_count": 8192, 00:21:42.972 "large_pool_count": 1024, 00:21:42.972 "small_bufsize": 8192, 00:21:42.972 "large_bufsize": 135168, 00:21:42.972 "enable_numa": false 00:21:42.972 } 00:21:42.972 } 00:21:42.972 ] 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "subsystem": "sock", 00:21:42.972 "config": [ 00:21:42.972 { 00:21:42.972 "method": "sock_set_default_impl", 00:21:42.972 "params": { 00:21:42.972 "impl_name": "posix" 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "sock_impl_set_options", 00:21:42.972 "params": { 00:21:42.972 "impl_name": "ssl", 00:21:42.972 "recv_buf_size": 4096, 00:21:42.972 "send_buf_size": 4096, 00:21:42.972 "enable_recv_pipe": true, 00:21:42.972 "enable_quickack": false, 00:21:42.972 "enable_placement_id": 0, 00:21:42.972 "enable_zerocopy_send_server": true, 00:21:42.972 "enable_zerocopy_send_client": false, 00:21:42.972 "zerocopy_threshold": 0, 00:21:42.972 "tls_version": 0, 00:21:42.972 "enable_ktls": false 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "sock_impl_set_options", 00:21:42.972 "params": { 00:21:42.972 "impl_name": "posix", 00:21:42.972 "recv_buf_size": 2097152, 00:21:42.972 "send_buf_size": 2097152, 00:21:42.972 "enable_recv_pipe": true, 00:21:42.972 "enable_quickack": false, 00:21:42.972 "enable_placement_id": 0, 00:21:42.972 "enable_zerocopy_send_server": true, 00:21:42.972 "enable_zerocopy_send_client": false, 00:21:42.972 "zerocopy_threshold": 0, 00:21:42.972 "tls_version": 0, 00:21:42.972 "enable_ktls": false 00:21:42.972 } 00:21:42.972 } 00:21:42.972 ] 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "subsystem": "vmd", 00:21:42.972 "config": [] 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "subsystem": "accel", 00:21:42.972 "config": [ 00:21:42.972 { 00:21:42.972 
"method": "accel_set_options", 00:21:42.972 "params": { 00:21:42.972 "small_cache_size": 128, 00:21:42.972 "large_cache_size": 16, 00:21:42.972 "task_count": 2048, 00:21:42.972 "sequence_count": 2048, 00:21:42.972 "buf_count": 2048 00:21:42.972 } 00:21:42.972 } 00:21:42.972 ] 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "subsystem": "bdev", 00:21:42.972 "config": [ 00:21:42.972 { 00:21:42.972 "method": "bdev_set_options", 00:21:42.972 "params": { 00:21:42.972 "bdev_io_pool_size": 65535, 00:21:42.972 "bdev_io_cache_size": 256, 00:21:42.972 "bdev_auto_examine": true, 00:21:42.972 "iobuf_small_cache_size": 128, 00:21:42.972 "iobuf_large_cache_size": 16 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "bdev_raid_set_options", 00:21:42.972 "params": { 00:21:42.972 "process_window_size_kb": 1024, 00:21:42.972 "process_max_bandwidth_mb_sec": 0 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "bdev_iscsi_set_options", 00:21:42.972 "params": { 00:21:42.972 "timeout_sec": 30 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "bdev_nvme_set_options", 00:21:42.972 "params": { 00:21:42.972 "action_on_timeout": "none", 00:21:42.972 "timeout_us": 0, 00:21:42.972 "timeout_admin_us": 0, 00:21:42.972 "keep_alive_timeout_ms": 10000, 00:21:42.972 "arbitration_burst": 0, 00:21:42.972 "low_priority_weight": 0, 00:21:42.972 "medium_priority_weight": 0, 00:21:42.972 "high_priority_weight": 0, 00:21:42.972 "nvme_adminq_poll_period_us": 10000, 00:21:42.972 "nvme_ioq_poll_period_us": 0, 00:21:42.972 "io_queue_requests": 512, 00:21:42.972 "delay_cmd_submit": true, 00:21:42.972 "transport_retry_count": 4, 00:21:42.972 "bdev_retry_count": 3, 00:21:42.972 "transport_ack_timeout": 0, 00:21:42.972 "ctrlr_loss_timeout_sec": 0, 00:21:42.972 "reconnect_delay_sec": 0, 00:21:42.972 "fast_io_fail_timeout_sec": 0, 00:21:42.972 "disable_auto_failback": false, 00:21:42.972 "generate_uuids": false, 00:21:42.972 "transport_tos": 0, 00:21:42.972 
"nvme_error_stat": false, 00:21:42.972 "rdma_srq_size": 0, 00:21:42.972 "io_path_stat": false, 00:21:42.972 "allow_accel_sequence": false, 00:21:42.972 "rdma_max_cq_size": 0, 00:21:42.972 "rdma_cm_event_timeout_ms": 0, 00:21:42.972 "dhchap_digests": [ 00:21:42.972 "sha256", 00:21:42.972 "sha384", 00:21:42.972 "sha512" 00:21:42.972 ], 00:21:42.972 "dhchap_dhgroups": [ 00:21:42.972 "null", 00:21:42.972 "ffdhe2048", 00:21:42.972 "ffdhe3072", 00:21:42.972 "ffdhe4096", 00:21:42.972 "ffdhe6144", 00:21:42.972 "ffdhe8192" 00:21:42.972 ] 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "bdev_nvme_attach_controller", 00:21:42.972 "params": { 00:21:42.972 "name": "TLSTEST", 00:21:42.972 "trtype": "TCP", 00:21:42.972 "adrfam": "IPv4", 00:21:42.972 "traddr": "10.0.0.2", 00:21:42.972 "trsvcid": "4420", 00:21:42.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.972 "prchk_reftag": false, 00:21:42.972 "prchk_guard": false, 00:21:42.972 "ctrlr_loss_timeout_sec": 0, 00:21:42.972 "reconnect_delay_sec": 0, 00:21:42.972 "fast_io_fail_timeout_sec": 0, 00:21:42.972 "psk": "key0", 00:21:42.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.972 "hdgst": false, 00:21:42.972 "ddgst": false, 00:21:42.972 "multipath": "multipath" 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "bdev_nvme_set_hotplug", 00:21:42.972 "params": { 00:21:42.972 "period_us": 100000, 00:21:42.972 "enable": false 00:21:42.972 } 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "method": "bdev_wait_for_examine" 00:21:42.972 } 00:21:42.972 ] 00:21:42.972 }, 00:21:42.972 { 00:21:42.972 "subsystem": "nbd", 00:21:42.972 "config": [] 00:21:42.972 } 00:21:42.972 ] 00:21:42.972 }' 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3196658 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3196658 ']' 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3196658 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196658 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196658' 00:21:42.972 killing process with pid 3196658 00:21:42.972 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3196658 00:21:42.972 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.972 00:21:42.972 Latency(us) 00:21:42.972 [2024-11-25T12:20:40.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.972 [2024-11-25T12:20:40.631Z] =================================================================================================================== 00:21:42.972 [2024-11-25T12:20:40.632Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:42.973 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3196658 00:21:42.973 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3196372 00:21:42.973 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3196372 ']' 00:21:42.973 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3196372 00:21:42.973 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:42.973 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.973 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196372 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196372' 00:21:43.232 killing process with pid 3196372 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3196372 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3196372 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:43.232 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:43.232 "subsystems": [ 00:21:43.232 { 00:21:43.232 "subsystem": "keyring", 00:21:43.232 "config": [ 00:21:43.232 { 00:21:43.232 "method": "keyring_file_add_key", 00:21:43.232 "params": { 00:21:43.232 "name": "key0", 00:21:43.232 "path": "/tmp/tmp.qFRJT7xJf9" 00:21:43.232 } 00:21:43.232 } 00:21:43.232 ] 00:21:43.232 }, 00:21:43.232 { 00:21:43.232 "subsystem": "iobuf", 00:21:43.232 "config": [ 00:21:43.232 { 00:21:43.232 "method": "iobuf_set_options", 00:21:43.232 "params": { 00:21:43.232 "small_pool_count": 8192, 00:21:43.232 "large_pool_count": 1024, 00:21:43.232 "small_bufsize": 8192, 00:21:43.232 "large_bufsize": 135168, 00:21:43.232 "enable_numa": false 00:21:43.232 } 00:21:43.232 } 00:21:43.232 ] 00:21:43.232 }, 00:21:43.232 { 00:21:43.232 "subsystem": "sock", 00:21:43.232 "config": [ 00:21:43.232 { 00:21:43.232 "method": 
"sock_set_default_impl", 00:21:43.232 "params": { 00:21:43.232 "impl_name": "posix" 00:21:43.232 } 00:21:43.232 }, 00:21:43.232 { 00:21:43.232 "method": "sock_impl_set_options", 00:21:43.232 "params": { 00:21:43.232 "impl_name": "ssl", 00:21:43.232 "recv_buf_size": 4096, 00:21:43.232 "send_buf_size": 4096, 00:21:43.232 "enable_recv_pipe": true, 00:21:43.232 "enable_quickack": false, 00:21:43.232 "enable_placement_id": 0, 00:21:43.232 "enable_zerocopy_send_server": true, 00:21:43.232 "enable_zerocopy_send_client": false, 00:21:43.232 "zerocopy_threshold": 0, 00:21:43.232 "tls_version": 0, 00:21:43.232 "enable_ktls": false 00:21:43.232 } 00:21:43.232 }, 00:21:43.232 { 00:21:43.232 "method": "sock_impl_set_options", 00:21:43.232 "params": { 00:21:43.232 "impl_name": "posix", 00:21:43.232 "recv_buf_size": 2097152, 00:21:43.232 "send_buf_size": 2097152, 00:21:43.232 "enable_recv_pipe": true, 00:21:43.232 "enable_quickack": false, 00:21:43.232 "enable_placement_id": 0, 00:21:43.232 "enable_zerocopy_send_server": true, 00:21:43.232 "enable_zerocopy_send_client": false, 00:21:43.232 "zerocopy_threshold": 0, 00:21:43.232 "tls_version": 0, 00:21:43.232 "enable_ktls": false 00:21:43.232 } 00:21:43.232 } 00:21:43.233 ] 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "subsystem": "vmd", 00:21:43.233 "config": [] 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "subsystem": "accel", 00:21:43.233 "config": [ 00:21:43.233 { 00:21:43.233 "method": "accel_set_options", 00:21:43.233 "params": { 00:21:43.233 "small_cache_size": 128, 00:21:43.233 "large_cache_size": 16, 00:21:43.233 "task_count": 2048, 00:21:43.233 "sequence_count": 2048, 00:21:43.233 "buf_count": 2048 00:21:43.233 } 00:21:43.233 } 00:21:43.233 ] 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "subsystem": "bdev", 00:21:43.233 "config": [ 00:21:43.233 { 00:21:43.233 "method": "bdev_set_options", 00:21:43.233 "params": { 00:21:43.233 "bdev_io_pool_size": 65535, 00:21:43.233 "bdev_io_cache_size": 256, 00:21:43.233 
"bdev_auto_examine": true, 00:21:43.233 "iobuf_small_cache_size": 128, 00:21:43.233 "iobuf_large_cache_size": 16 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "bdev_raid_set_options", 00:21:43.233 "params": { 00:21:43.233 "process_window_size_kb": 1024, 00:21:43.233 "process_max_bandwidth_mb_sec": 0 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "bdev_iscsi_set_options", 00:21:43.233 "params": { 00:21:43.233 "timeout_sec": 30 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "bdev_nvme_set_options", 00:21:43.233 "params": { 00:21:43.233 "action_on_timeout": "none", 00:21:43.233 "timeout_us": 0, 00:21:43.233 "timeout_admin_us": 0, 00:21:43.233 "keep_alive_timeout_ms": 10000, 00:21:43.233 "arbitration_burst": 0, 00:21:43.233 "low_priority_weight": 0, 00:21:43.233 "medium_priority_weight": 0, 00:21:43.233 "high_priority_weight": 0, 00:21:43.233 "nvme_adminq_poll_period_us": 10000, 00:21:43.233 "nvme_ioq_poll_period_us": 0, 00:21:43.233 "io_queue_requests": 0, 00:21:43.233 "delay_cmd_submit": true, 00:21:43.233 "transport_retry_count": 4, 00:21:43.233 "bdev_retry_count": 3, 00:21:43.233 "transport_ack_timeout": 0, 00:21:43.233 "ctrlr_loss_timeout_sec": 0, 00:21:43.233 "reconnect_delay_sec": 0, 00:21:43.233 "fast_io_fail_timeout_sec": 0, 00:21:43.233 "disable_auto_failback": false, 00:21:43.233 "generate_uuids": false, 00:21:43.233 "transport_tos": 0, 00:21:43.233 "nvme_error_stat": false, 00:21:43.233 "rdma_srq_size": 0, 00:21:43.233 "io_path_stat": false, 00:21:43.233 "allow_accel_sequence": false, 00:21:43.233 "rdma_max_cq_size": 0, 00:21:43.233 "rdma_cm_event_timeout_ms": 0, 00:21:43.233 "dhchap_digests": [ 00:21:43.233 "sha256", 00:21:43.233 "sha384", 00:21:43.233 "sha512" 00:21:43.233 ], 00:21:43.233 "dhchap_dhgroups": [ 00:21:43.233 "null", 00:21:43.233 "ffdhe2048", 00:21:43.233 "ffdhe3072", 00:21:43.233 "ffdhe4096", 00:21:43.233 "ffdhe6144", 00:21:43.233 "ffdhe8192" 00:21:43.233 ] 00:21:43.233 } 
00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "bdev_nvme_set_hotplug", 00:21:43.233 "params": { 00:21:43.233 "period_us": 100000, 00:21:43.233 "enable": false 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "bdev_malloc_create", 00:21:43.233 "params": { 00:21:43.233 "name": "malloc0", 00:21:43.233 "num_blocks": 8192, 00:21:43.233 "block_size": 4096, 00:21:43.233 "physical_block_size": 4096, 00:21:43.233 "uuid": "b16705e3-10d4-4b96-8fdb-18061691c80a", 00:21:43.233 "optimal_io_boundary": 0, 00:21:43.233 "md_size": 0, 00:21:43.233 "dif_type": 0, 00:21:43.233 "dif_is_head_of_md": false, 00:21:43.233 "dif_pi_format": 0 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "bdev_wait_for_examine" 00:21:43.233 } 00:21:43.233 ] 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "subsystem": "nbd", 00:21:43.233 "config": [] 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "subsystem": "scheduler", 00:21:43.233 "config": [ 00:21:43.233 { 00:21:43.233 "method": "framework_set_scheduler", 00:21:43.233 "params": { 00:21:43.233 "name": "static" 00:21:43.233 } 00:21:43.233 } 00:21:43.233 ] 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "subsystem": "nvmf", 00:21:43.233 "config": [ 00:21:43.233 { 00:21:43.233 "method": "nvmf_set_config", 00:21:43.233 "params": { 00:21:43.233 "discovery_filter": "match_any", 00:21:43.233 "admin_cmd_passthru": { 00:21:43.233 "identify_ctrlr": false 00:21:43.233 }, 00:21:43.233 "dhchap_digests": [ 00:21:43.233 "sha256", 00:21:43.233 "sha384", 00:21:43.233 "sha512" 00:21:43.233 ], 00:21:43.233 "dhchap_dhgroups": [ 00:21:43.233 "null", 00:21:43.233 "ffdhe2048", 00:21:43.233 "ffdhe3072", 00:21:43.233 "ffdhe4096", 00:21:43.233 "ffdhe6144", 00:21:43.233 "ffdhe8192" 00:21:43.233 ] 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "nvmf_set_max_subsystems", 00:21:43.233 "params": { 00:21:43.233 "max_subsystems": 1024 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "nvmf_set_crdt", 
00:21:43.233 "params": { 00:21:43.233 "crdt1": 0, 00:21:43.233 "crdt2": 0, 00:21:43.233 "crdt3": 0 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "nvmf_create_transport", 00:21:43.233 "params": { 00:21:43.233 "trtype": "TCP", 00:21:43.233 "max_queue_depth": 128, 00:21:43.233 "max_io_qpairs_per_ctrlr": 127, 00:21:43.233 "in_capsule_data_size": 4096, 00:21:43.233 "max_io_size": 131072, 00:21:43.233 "io_unit_size": 131072, 00:21:43.233 "max_aq_depth": 128, 00:21:43.233 "num_shared_buffers": 511, 00:21:43.233 "buf_cache_size": 4294967295, 00:21:43.233 "dif_insert_or_strip": false, 00:21:43.233 "zcopy": false, 00:21:43.233 "c2h_success": false, 00:21:43.233 "sock_priority": 0, 00:21:43.233 "abort_timeout_sec": 1, 00:21:43.233 "ack_timeout": 0, 00:21:43.233 "data_wr_pool_size": 0 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "nvmf_create_subsystem", 00:21:43.233 "params": { 00:21:43.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.233 "allow_any_host": false, 00:21:43.233 "serial_number": "SPDK00000000000001", 00:21:43.233 "model_number": "SPDK bdev Controller", 00:21:43.233 "max_namespaces": 10, 00:21:43.233 "min_cntlid": 1, 00:21:43.233 "max_cntlid": 65519, 00:21:43.233 "ana_reporting": false 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "nvmf_subsystem_add_host", 00:21:43.233 "params": { 00:21:43.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.233 "host": "nqn.2016-06.io.spdk:host1", 00:21:43.233 "psk": "key0" 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 "method": "nvmf_subsystem_add_ns", 00:21:43.233 "params": { 00:21:43.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.233 "namespace": { 00:21:43.233 "nsid": 1, 00:21:43.233 "bdev_name": "malloc0", 00:21:43.233 "nguid": "B16705E310D44B968FDB18061691C80A", 00:21:43.233 "uuid": "b16705e3-10d4-4b96-8fdb-18061691c80a", 00:21:43.233 "no_auto_visible": false 00:21:43.233 } 00:21:43.233 } 00:21:43.233 }, 00:21:43.233 { 00:21:43.233 
"method": "nvmf_subsystem_add_listener", 00:21:43.233 "params": { 00:21:43.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.233 "listen_address": { 00:21:43.233 "trtype": "TCP", 00:21:43.233 "adrfam": "IPv4", 00:21:43.233 "traddr": "10.0.0.2", 00:21:43.233 "trsvcid": "4420" 00:21:43.233 }, 00:21:43.233 "secure_channel": true 00:21:43.233 } 00:21:43.233 } 00:21:43.233 ] 00:21:43.233 } 00:21:43.233 ] 00:21:43.233 }' 00:21:43.233 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:43.234 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3196937 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3196937 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3196937 ']' 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:43.492 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.492 [2024-11-25 13:20:40.945432] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:43.492 [2024-11-25 13:20:40.945547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.492 [2024-11-25 13:20:41.018697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.492 [2024-11-25 13:20:41.079319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.492 [2024-11-25 13:20:41.079376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.492 [2024-11-25 13:20:41.079392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.492 [2024-11-25 13:20:41.079405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.492 [2024-11-25 13:20:41.079415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:43.492 [2024-11-25 13:20:41.080062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.750 [2024-11-25 13:20:41.311264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.750 [2024-11-25 13:20:41.343278] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:43.750 [2024-11-25 13:20:41.343505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3197090 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3197090 /var/tmp/bdevperf.sock 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3197090 ']' 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.316 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:44.317 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:44.317 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:44.317 "subsystems": [ 00:21:44.317 { 00:21:44.317 "subsystem": "keyring", 00:21:44.317 "config": [ 00:21:44.317 { 00:21:44.317 "method": "keyring_file_add_key", 00:21:44.317 "params": { 00:21:44.317 "name": "key0", 00:21:44.317 "path": "/tmp/tmp.qFRJT7xJf9" 00:21:44.317 } 00:21:44.317 } 00:21:44.317 ] 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "subsystem": "iobuf", 00:21:44.317 "config": [ 00:21:44.317 { 00:21:44.317 "method": "iobuf_set_options", 00:21:44.317 "params": { 00:21:44.317 "small_pool_count": 8192, 00:21:44.317 "large_pool_count": 1024, 00:21:44.317 "small_bufsize": 8192, 00:21:44.317 "large_bufsize": 135168, 00:21:44.317 "enable_numa": false 00:21:44.317 } 00:21:44.317 } 00:21:44.317 ] 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "subsystem": "sock", 00:21:44.317 "config": [ 00:21:44.317 { 00:21:44.317 "method": "sock_set_default_impl", 00:21:44.317 "params": { 00:21:44.317 "impl_name": "posix" 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "sock_impl_set_options", 00:21:44.317 "params": { 00:21:44.317 "impl_name": "ssl", 00:21:44.317 "recv_buf_size": 4096, 00:21:44.317 "send_buf_size": 4096, 00:21:44.317 "enable_recv_pipe": true, 00:21:44.317 "enable_quickack": false, 00:21:44.317 "enable_placement_id": 0, 00:21:44.317 "enable_zerocopy_send_server": true, 00:21:44.317 "enable_zerocopy_send_client": false, 00:21:44.317 "zerocopy_threshold": 0, 00:21:44.317 "tls_version": 0, 00:21:44.317 "enable_ktls": false 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "sock_impl_set_options", 00:21:44.317 "params": { 00:21:44.317 "impl_name": "posix", 00:21:44.317 "recv_buf_size": 2097152, 00:21:44.317 "send_buf_size": 2097152, 00:21:44.317 "enable_recv_pipe": true, 00:21:44.317 "enable_quickack": false, 00:21:44.317 "enable_placement_id": 0, 00:21:44.317 "enable_zerocopy_send_server": true, 00:21:44.317 
"enable_zerocopy_send_client": false, 00:21:44.317 "zerocopy_threshold": 0, 00:21:44.317 "tls_version": 0, 00:21:44.317 "enable_ktls": false 00:21:44.317 } 00:21:44.317 } 00:21:44.317 ] 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "subsystem": "vmd", 00:21:44.317 "config": [] 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "subsystem": "accel", 00:21:44.317 "config": [ 00:21:44.317 { 00:21:44.317 "method": "accel_set_options", 00:21:44.317 "params": { 00:21:44.317 "small_cache_size": 128, 00:21:44.317 "large_cache_size": 16, 00:21:44.317 "task_count": 2048, 00:21:44.317 "sequence_count": 2048, 00:21:44.317 "buf_count": 2048 00:21:44.317 } 00:21:44.317 } 00:21:44.317 ] 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "subsystem": "bdev", 00:21:44.317 "config": [ 00:21:44.317 { 00:21:44.317 "method": "bdev_set_options", 00:21:44.317 "params": { 00:21:44.317 "bdev_io_pool_size": 65535, 00:21:44.317 "bdev_io_cache_size": 256, 00:21:44.317 "bdev_auto_examine": true, 00:21:44.317 "iobuf_small_cache_size": 128, 00:21:44.317 "iobuf_large_cache_size": 16 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "bdev_raid_set_options", 00:21:44.317 "params": { 00:21:44.317 "process_window_size_kb": 1024, 00:21:44.317 "process_max_bandwidth_mb_sec": 0 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "bdev_iscsi_set_options", 00:21:44.317 "params": { 00:21:44.317 "timeout_sec": 30 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "bdev_nvme_set_options", 00:21:44.317 "params": { 00:21:44.317 "action_on_timeout": "none", 00:21:44.317 "timeout_us": 0, 00:21:44.317 "timeout_admin_us": 0, 00:21:44.317 "keep_alive_timeout_ms": 10000, 00:21:44.317 "arbitration_burst": 0, 00:21:44.317 "low_priority_weight": 0, 00:21:44.317 "medium_priority_weight": 0, 00:21:44.317 "high_priority_weight": 0, 00:21:44.317 "nvme_adminq_poll_period_us": 10000, 00:21:44.317 "nvme_ioq_poll_period_us": 0, 00:21:44.317 "io_queue_requests": 512, 00:21:44.317 
"delay_cmd_submit": true, 00:21:44.317 "transport_retry_count": 4, 00:21:44.317 "bdev_retry_count": 3, 00:21:44.317 "transport_ack_timeout": 0, 00:21:44.317 "ctrlr_loss_timeout_sec": 0, 00:21:44.317 "reconnect_delay_sec": 0, 00:21:44.317 "fast_io_fail_timeout_sec": 0, 00:21:44.317 "disable_auto_failback": false, 00:21:44.317 "generate_uuids": false, 00:21:44.317 "transport_tos": 0, 00:21:44.317 "nvme_error_stat": false, 00:21:44.317 "rdma_srq_size": 0, 00:21:44.317 "io_path_stat": false, 00:21:44.317 "allow_accel_sequence": false, 00:21:44.317 "rdma_max_cq_size": 0, 00:21:44.317 "rdma_cm_event_timeout_ms": 0, 00:21:44.317 "dhchap_digests": [ 00:21:44.317 "sha256", 00:21:44.317 "sha384", 00:21:44.317 "sha512" 00:21:44.317 ], 00:21:44.317 "dhchap_dhgroups": [ 00:21:44.317 "null", 00:21:44.317 "ffdhe2048", 00:21:44.317 "ffdhe3072", 00:21:44.317 "ffdhe4096", 00:21:44.317 "ffdhe6144", 00:21:44.317 "ffdhe8192" 00:21:44.317 ] 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "bdev_nvme_attach_controller", 00:21:44.317 "params": { 00:21:44.317 "name": "TLSTEST", 00:21:44.317 "trtype": "TCP", 00:21:44.317 "adrfam": "IPv4", 00:21:44.317 "traddr": "10.0.0.2", 00:21:44.317 "trsvcid": "4420", 00:21:44.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.317 "prchk_reftag": false, 00:21:44.317 "prchk_guard": false, 00:21:44.317 "ctrlr_loss_timeout_sec": 0, 00:21:44.317 "reconnect_delay_sec": 0, 00:21:44.317 "fast_io_fail_timeout_sec": 0, 00:21:44.317 "psk": "key0", 00:21:44.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.317 "hdgst": false, 00:21:44.317 "ddgst": false, 00:21:44.317 "multipath": "multipath" 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "bdev_nvme_set_hotplug", 00:21:44.317 "params": { 00:21:44.317 "period_us": 100000, 00:21:44.317 "enable": false 00:21:44.317 } 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 "method": "bdev_wait_for_examine" 00:21:44.317 } 00:21:44.317 ] 00:21:44.317 }, 00:21:44.317 { 00:21:44.317 
"subsystem": "nbd", 00:21:44.317 "config": [] 00:21:44.317 } 00:21:44.317 ] 00:21:44.317 }' 00:21:44.317 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.317 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.317 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.575 [2024-11-25 13:20:41.999673] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:44.575 [2024-11-25 13:20:41.999764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197090 ] 00:21:44.575 [2024-11-25 13:20:42.065523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.575 [2024-11-25 13:20:42.123118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.834 [2024-11-25 13:20:42.303872] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.834 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.834 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:44.834 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:45.093 Running I/O for 10 seconds... 
00:21:46.961 3189.00 IOPS, 12.46 MiB/s [2024-11-25T12:20:45.553Z] 3313.00 IOPS, 12.94 MiB/s [2024-11-25T12:20:46.926Z] 3312.67 IOPS, 12.94 MiB/s [2024-11-25T12:20:47.861Z] 3360.25 IOPS, 13.13 MiB/s [2024-11-25T12:20:48.795Z] 3369.00 IOPS, 13.16 MiB/s [2024-11-25T12:20:49.726Z] 3384.50 IOPS, 13.22 MiB/s [2024-11-25T12:20:50.656Z] 3379.86 IOPS, 13.20 MiB/s [2024-11-25T12:20:51.638Z] 3379.75 IOPS, 13.20 MiB/s [2024-11-25T12:20:52.568Z] 3368.67 IOPS, 13.16 MiB/s [2024-11-25T12:20:52.826Z] 3377.90 IOPS, 13.19 MiB/s 00:21:55.167 Latency(us) 00:21:55.167 [2024-11-25T12:20:52.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.167 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:55.167 Verification LBA range: start 0x0 length 0x2000 00:21:55.167 TLSTESTn1 : 10.03 3379.76 13.20 0.00 0.00 37791.19 6359.42 38253.61 00:21:55.167 [2024-11-25T12:20:52.826Z] =================================================================================================================== 00:21:55.167 [2024-11-25T12:20:52.826Z] Total : 3379.76 13.20 0.00 0.00 37791.19 6359.42 38253.61 00:21:55.167 { 00:21:55.167 "results": [ 00:21:55.167 { 00:21:55.167 "job": "TLSTESTn1", 00:21:55.167 "core_mask": "0x4", 00:21:55.167 "workload": "verify", 00:21:55.167 "status": "finished", 00:21:55.167 "verify_range": { 00:21:55.167 "start": 0, 00:21:55.167 "length": 8192 00:21:55.167 }, 00:21:55.167 "queue_depth": 128, 00:21:55.167 "io_size": 4096, 00:21:55.167 "runtime": 10.031788, 00:21:55.167 "iops": 3379.756430259491, 00:21:55.167 "mibps": 13.202173555701137, 00:21:55.167 "io_failed": 0, 00:21:55.167 "io_timeout": 0, 00:21:55.167 "avg_latency_us": 37791.1860118086, 00:21:55.167 "min_latency_us": 6359.419259259259, 00:21:55.167 "max_latency_us": 38253.60592592593 00:21:55.167 } 00:21:55.167 ], 00:21:55.167 "core_count": 1 00:21:55.167 } 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3197090 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3197090 ']' 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3197090 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3197090 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3197090' 00:21:55.167 killing process with pid 3197090 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3197090 00:21:55.167 Received shutdown signal, test time was about 10.000000 seconds 00:21:55.167 00:21:55.167 Latency(us) 00:21:55.167 [2024-11-25T12:20:52.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.167 [2024-11-25T12:20:52.826Z] =================================================================================================================== 00:21:55.167 [2024-11-25T12:20:52.826Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.167 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3197090 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3196937 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3196937 ']' 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3196937 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196937 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196937' 00:21:55.426 killing process with pid 3196937 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3196937 00:21:55.426 13:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3196937 00:21:55.685 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:55.685 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.685 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.685 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.685 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3198417 00:21:55.685 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:55.685 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3198417 00:21:55.685 
13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3198417 ']' 00:21:55.686 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.686 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.686 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.686 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.686 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.686 [2024-11-25 13:20:53.158979] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:55.686 [2024-11-25 13:20:53.159078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.686 [2024-11-25 13:20:53.232480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.686 [2024-11-25 13:20:53.290713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.686 [2024-11-25 13:20:53.290770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.686 [2024-11-25 13:20:53.290783] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.686 [2024-11-25 13:20:53.290795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:55.686 [2024-11-25 13:20:53.290804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.686 [2024-11-25 13:20:53.291421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.945 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.945 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:55.945 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.945 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.945 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.946 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.946 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.qFRJT7xJf9 00:21:55.946 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qFRJT7xJf9 00:21:55.946 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:56.204 [2024-11-25 13:20:53.697173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.204 13:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:56.461 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:56.719 [2024-11-25 13:20:54.322894] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:56.719 [2024-11-25 13:20:54.323130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.719 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:56.977 malloc0 00:21:56.977 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:57.235 13:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:57.493 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3198713 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3198713 /var/tmp/bdevperf.sock 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3198713 ']' 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:57.751 
13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.751 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.009 [2024-11-25 13:20:55.450162] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:21:58.009 [2024-11-25 13:20:55.450257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198713 ] 00:21:58.009 [2024-11-25 13:20:55.516322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.009 [2024-11-25 13:20:55.573296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.267 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.267 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:58.267 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:21:58.525 13:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:58.783 [2024-11-25 13:20:56.203636] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:58.783 nvme0n1 00:21:58.783 13:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.783 Running I/O for 1 seconds... 00:22:00.154 3449.00 IOPS, 13.47 MiB/s 00:22:00.154 Latency(us) 00:22:00.154 [2024-11-25T12:20:57.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.154 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:00.154 Verification LBA range: start 0x0 length 0x2000 00:22:00.154 nvme0n1 : 1.02 3495.98 13.66 0.00 0.00 36239.26 9175.04 30098.01 00:22:00.154 [2024-11-25T12:20:57.813Z] =================================================================================================================== 00:22:00.154 [2024-11-25T12:20:57.813Z] Total : 3495.98 13.66 0.00 0.00 36239.26 9175.04 30098.01 00:22:00.154 { 00:22:00.154 "results": [ 00:22:00.154 { 00:22:00.154 "job": "nvme0n1", 00:22:00.154 "core_mask": "0x2", 00:22:00.154 "workload": "verify", 00:22:00.154 "status": "finished", 00:22:00.154 "verify_range": { 00:22:00.154 "start": 0, 00:22:00.154 "length": 8192 00:22:00.154 }, 00:22:00.154 "queue_depth": 128, 00:22:00.154 "io_size": 4096, 00:22:00.154 "runtime": 1.023174, 00:22:00.154 "iops": 3495.9840652714006, 00:22:00.154 "mibps": 13.656187754966409, 00:22:00.154 "io_failed": 0, 00:22:00.154 "io_timeout": 0, 00:22:00.154 "avg_latency_us": 36239.25895608776, 00:22:00.154 "min_latency_us": 9175.04, 00:22:00.154 "max_latency_us": 30098.014814814815 00:22:00.154 } 00:22:00.154 ], 00:22:00.154 "core_count": 1 00:22:00.154 } 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3198713 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3198713 ']' 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3198713 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198713 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198713' 00:22:00.154 killing process with pid 3198713 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3198713 00:22:00.154 Received shutdown signal, test time was about 1.000000 seconds 00:22:00.154 00:22:00.154 Latency(us) 00:22:00.154 [2024-11-25T12:20:57.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.154 [2024-11-25T12:20:57.813Z] =================================================================================================================== 00:22:00.154 [2024-11-25T12:20:57.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3198713 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3198417 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3198417 ']' 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3198417 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198417 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198417' 00:22:00.154 killing process with pid 3198417 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3198417 00:22:00.154 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3198417 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3198992 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3198992 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3198992 ']' 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.412 13:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.412 [2024-11-25 13:20:58.042662] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:22:00.412 [2024-11-25 13:20:58.042764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.671 [2024-11-25 13:20:58.116818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.671 [2024-11-25 13:20:58.174199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.671 [2024-11-25 13:20:58.174251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.671 [2024-11-25 13:20:58.174273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.671 [2024-11-25 13:20:58.174284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.671 [2024-11-25 13:20:58.174294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.671 [2024-11-25 13:20:58.174904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.671 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.671 [2024-11-25 13:20:58.317230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.929 malloc0 00:22:00.929 [2024-11-25 13:20:58.349839] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.929 [2024-11-25 13:20:58.350066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3199127 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 3199127 /var/tmp/bdevperf.sock 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3199127 ']' 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.930 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.930 [2024-11-25 13:20:58.422139] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:22:00.930 [2024-11-25 13:20:58.422199] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199127 ] 00:22:00.930 [2024-11-25 13:20:58.487706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.930 [2024-11-25 13:20:58.544972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.188 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.188 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:01.188 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qFRJT7xJf9 00:22:01.445 13:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:01.702 [2024-11-25 13:20:59.230127] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.702 nvme0n1 00:22:01.702 13:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.960 Running I/O for 1 seconds... 
00:22:02.894 3387.00 IOPS, 13.23 MiB/s 00:22:02.894 Latency(us) 00:22:02.894 [2024-11-25T12:21:00.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.894 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:02.894 Verification LBA range: start 0x0 length 0x2000 00:22:02.894 nvme0n1 : 1.02 3441.56 13.44 0.00 0.00 36829.48 8689.59 54758.97 00:22:02.894 [2024-11-25T12:21:00.553Z] =================================================================================================================== 00:22:02.894 [2024-11-25T12:21:00.553Z] Total : 3441.56 13.44 0.00 0.00 36829.48 8689.59 54758.97 00:22:02.894 { 00:22:02.894 "results": [ 00:22:02.894 { 00:22:02.894 "job": "nvme0n1", 00:22:02.894 "core_mask": "0x2", 00:22:02.894 "workload": "verify", 00:22:02.894 "status": "finished", 00:22:02.894 "verify_range": { 00:22:02.894 "start": 0, 00:22:02.894 "length": 8192 00:22:02.894 }, 00:22:02.894 "queue_depth": 128, 00:22:02.894 "io_size": 4096, 00:22:02.894 "runtime": 1.021339, 00:22:02.894 "iops": 3441.5605396445253, 00:22:02.894 "mibps": 13.443595857986427, 00:22:02.894 "io_failed": 0, 00:22:02.894 "io_timeout": 0, 00:22:02.894 "avg_latency_us": 36829.47802033613, 00:22:02.894 "min_latency_us": 8689.588148148148, 00:22:02.894 "max_latency_us": 54758.96888888889 00:22:02.894 } 00:22:02.894 ], 00:22:02.894 "core_count": 1 00:22:02.894 } 00:22:02.894 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:02.894 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.894 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.152 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.152 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:03.152 "subsystems": [ 00:22:03.152 { 00:22:03.152 "subsystem": 
"keyring", 00:22:03.152 "config": [ 00:22:03.152 { 00:22:03.152 "method": "keyring_file_add_key", 00:22:03.152 "params": { 00:22:03.152 "name": "key0", 00:22:03.152 "path": "/tmp/tmp.qFRJT7xJf9" 00:22:03.152 } 00:22:03.152 } 00:22:03.152 ] 00:22:03.152 }, 00:22:03.152 { 00:22:03.152 "subsystem": "iobuf", 00:22:03.152 "config": [ 00:22:03.152 { 00:22:03.152 "method": "iobuf_set_options", 00:22:03.152 "params": { 00:22:03.152 "small_pool_count": 8192, 00:22:03.152 "large_pool_count": 1024, 00:22:03.152 "small_bufsize": 8192, 00:22:03.152 "large_bufsize": 135168, 00:22:03.152 "enable_numa": false 00:22:03.152 } 00:22:03.152 } 00:22:03.152 ] 00:22:03.152 }, 00:22:03.152 { 00:22:03.152 "subsystem": "sock", 00:22:03.152 "config": [ 00:22:03.152 { 00:22:03.152 "method": "sock_set_default_impl", 00:22:03.152 "params": { 00:22:03.152 "impl_name": "posix" 00:22:03.152 } 00:22:03.152 }, 00:22:03.152 { 00:22:03.152 "method": "sock_impl_set_options", 00:22:03.152 "params": { 00:22:03.152 "impl_name": "ssl", 00:22:03.152 "recv_buf_size": 4096, 00:22:03.152 "send_buf_size": 4096, 00:22:03.152 "enable_recv_pipe": true, 00:22:03.152 "enable_quickack": false, 00:22:03.152 "enable_placement_id": 0, 00:22:03.152 "enable_zerocopy_send_server": true, 00:22:03.152 "enable_zerocopy_send_client": false, 00:22:03.152 "zerocopy_threshold": 0, 00:22:03.152 "tls_version": 0, 00:22:03.152 "enable_ktls": false 00:22:03.152 } 00:22:03.152 }, 00:22:03.152 { 00:22:03.152 "method": "sock_impl_set_options", 00:22:03.152 "params": { 00:22:03.152 "impl_name": "posix", 00:22:03.152 "recv_buf_size": 2097152, 00:22:03.152 "send_buf_size": 2097152, 00:22:03.152 "enable_recv_pipe": true, 00:22:03.152 "enable_quickack": false, 00:22:03.152 "enable_placement_id": 0, 00:22:03.152 "enable_zerocopy_send_server": true, 00:22:03.152 "enable_zerocopy_send_client": false, 00:22:03.152 "zerocopy_threshold": 0, 00:22:03.152 "tls_version": 0, 00:22:03.152 "enable_ktls": false 00:22:03.152 } 00:22:03.152 } 00:22:03.152 
] 00:22:03.152 }, 00:22:03.152 { 00:22:03.152 "subsystem": "vmd", 00:22:03.152 "config": [] 00:22:03.152 }, 00:22:03.152 { 00:22:03.152 "subsystem": "accel", 00:22:03.152 "config": [ 00:22:03.152 { 00:22:03.152 "method": "accel_set_options", 00:22:03.152 "params": { 00:22:03.152 "small_cache_size": 128, 00:22:03.152 "large_cache_size": 16, 00:22:03.152 "task_count": 2048, 00:22:03.152 "sequence_count": 2048, 00:22:03.152 "buf_count": 2048 00:22:03.152 } 00:22:03.152 } 00:22:03.152 ] 00:22:03.152 }, 00:22:03.152 { 00:22:03.152 "subsystem": "bdev", 00:22:03.152 "config": [ 00:22:03.152 { 00:22:03.152 "method": "bdev_set_options", 00:22:03.152 "params": { 00:22:03.152 "bdev_io_pool_size": 65535, 00:22:03.152 "bdev_io_cache_size": 256, 00:22:03.153 "bdev_auto_examine": true, 00:22:03.153 "iobuf_small_cache_size": 128, 00:22:03.153 "iobuf_large_cache_size": 16 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "bdev_raid_set_options", 00:22:03.153 "params": { 00:22:03.153 "process_window_size_kb": 1024, 00:22:03.153 "process_max_bandwidth_mb_sec": 0 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "bdev_iscsi_set_options", 00:22:03.153 "params": { 00:22:03.153 "timeout_sec": 30 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "bdev_nvme_set_options", 00:22:03.153 "params": { 00:22:03.153 "action_on_timeout": "none", 00:22:03.153 "timeout_us": 0, 00:22:03.153 "timeout_admin_us": 0, 00:22:03.153 "keep_alive_timeout_ms": 10000, 00:22:03.153 "arbitration_burst": 0, 00:22:03.153 "low_priority_weight": 0, 00:22:03.153 "medium_priority_weight": 0, 00:22:03.153 "high_priority_weight": 0, 00:22:03.153 "nvme_adminq_poll_period_us": 10000, 00:22:03.153 "nvme_ioq_poll_period_us": 0, 00:22:03.153 "io_queue_requests": 0, 00:22:03.153 "delay_cmd_submit": true, 00:22:03.153 "transport_retry_count": 4, 00:22:03.153 "bdev_retry_count": 3, 00:22:03.153 "transport_ack_timeout": 0, 00:22:03.153 "ctrlr_loss_timeout_sec": 0, 
00:22:03.153 "reconnect_delay_sec": 0, 00:22:03.153 "fast_io_fail_timeout_sec": 0, 00:22:03.153 "disable_auto_failback": false, 00:22:03.153 "generate_uuids": false, 00:22:03.153 "transport_tos": 0, 00:22:03.153 "nvme_error_stat": false, 00:22:03.153 "rdma_srq_size": 0, 00:22:03.153 "io_path_stat": false, 00:22:03.153 "allow_accel_sequence": false, 00:22:03.153 "rdma_max_cq_size": 0, 00:22:03.153 "rdma_cm_event_timeout_ms": 0, 00:22:03.153 "dhchap_digests": [ 00:22:03.153 "sha256", 00:22:03.153 "sha384", 00:22:03.153 "sha512" 00:22:03.153 ], 00:22:03.153 "dhchap_dhgroups": [ 00:22:03.153 "null", 00:22:03.153 "ffdhe2048", 00:22:03.153 "ffdhe3072", 00:22:03.153 "ffdhe4096", 00:22:03.153 "ffdhe6144", 00:22:03.153 "ffdhe8192" 00:22:03.153 ] 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "bdev_nvme_set_hotplug", 00:22:03.153 "params": { 00:22:03.153 "period_us": 100000, 00:22:03.153 "enable": false 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "bdev_malloc_create", 00:22:03.153 "params": { 00:22:03.153 "name": "malloc0", 00:22:03.153 "num_blocks": 8192, 00:22:03.153 "block_size": 4096, 00:22:03.153 "physical_block_size": 4096, 00:22:03.153 "uuid": "059283a4-b19c-4317-917d-71dfd9e6d6e8", 00:22:03.153 "optimal_io_boundary": 0, 00:22:03.153 "md_size": 0, 00:22:03.153 "dif_type": 0, 00:22:03.153 "dif_is_head_of_md": false, 00:22:03.153 "dif_pi_format": 0 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "bdev_wait_for_examine" 00:22:03.153 } 00:22:03.153 ] 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "subsystem": "nbd", 00:22:03.153 "config": [] 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "subsystem": "scheduler", 00:22:03.153 "config": [ 00:22:03.153 { 00:22:03.153 "method": "framework_set_scheduler", 00:22:03.153 "params": { 00:22:03.153 "name": "static" 00:22:03.153 } 00:22:03.153 } 00:22:03.153 ] 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "subsystem": "nvmf", 00:22:03.153 "config": [ 00:22:03.153 { 
00:22:03.153 "method": "nvmf_set_config", 00:22:03.153 "params": { 00:22:03.153 "discovery_filter": "match_any", 00:22:03.153 "admin_cmd_passthru": { 00:22:03.153 "identify_ctrlr": false 00:22:03.153 }, 00:22:03.153 "dhchap_digests": [ 00:22:03.153 "sha256", 00:22:03.153 "sha384", 00:22:03.153 "sha512" 00:22:03.153 ], 00:22:03.153 "dhchap_dhgroups": [ 00:22:03.153 "null", 00:22:03.153 "ffdhe2048", 00:22:03.153 "ffdhe3072", 00:22:03.153 "ffdhe4096", 00:22:03.153 "ffdhe6144", 00:22:03.153 "ffdhe8192" 00:22:03.153 ] 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "nvmf_set_max_subsystems", 00:22:03.153 "params": { 00:22:03.153 "max_subsystems": 1024 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "nvmf_set_crdt", 00:22:03.153 "params": { 00:22:03.153 "crdt1": 0, 00:22:03.153 "crdt2": 0, 00:22:03.153 "crdt3": 0 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "nvmf_create_transport", 00:22:03.153 "params": { 00:22:03.153 "trtype": "TCP", 00:22:03.153 "max_queue_depth": 128, 00:22:03.153 "max_io_qpairs_per_ctrlr": 127, 00:22:03.153 "in_capsule_data_size": 4096, 00:22:03.153 "max_io_size": 131072, 00:22:03.153 "io_unit_size": 131072, 00:22:03.153 "max_aq_depth": 128, 00:22:03.153 "num_shared_buffers": 511, 00:22:03.153 "buf_cache_size": 4294967295, 00:22:03.153 "dif_insert_or_strip": false, 00:22:03.153 "zcopy": false, 00:22:03.153 "c2h_success": false, 00:22:03.153 "sock_priority": 0, 00:22:03.153 "abort_timeout_sec": 1, 00:22:03.153 "ack_timeout": 0, 00:22:03.153 "data_wr_pool_size": 0 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "nvmf_create_subsystem", 00:22:03.153 "params": { 00:22:03.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.153 "allow_any_host": false, 00:22:03.153 "serial_number": "00000000000000000000", 00:22:03.153 "model_number": "SPDK bdev Controller", 00:22:03.153 "max_namespaces": 32, 00:22:03.153 "min_cntlid": 1, 00:22:03.153 "max_cntlid": 65519, 00:22:03.153 
"ana_reporting": false 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "nvmf_subsystem_add_host", 00:22:03.153 "params": { 00:22:03.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.153 "host": "nqn.2016-06.io.spdk:host1", 00:22:03.153 "psk": "key0" 00:22:03.153 } 00:22:03.153 }, 00:22:03.153 { 00:22:03.153 "method": "nvmf_subsystem_add_ns", 00:22:03.154 "params": { 00:22:03.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.154 "namespace": { 00:22:03.154 "nsid": 1, 00:22:03.154 "bdev_name": "malloc0", 00:22:03.154 "nguid": "059283A4B19C4317917D71DFD9E6D6E8", 00:22:03.154 "uuid": "059283a4-b19c-4317-917d-71dfd9e6d6e8", 00:22:03.154 "no_auto_visible": false 00:22:03.154 } 00:22:03.154 } 00:22:03.154 }, 00:22:03.154 { 00:22:03.154 "method": "nvmf_subsystem_add_listener", 00:22:03.154 "params": { 00:22:03.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.154 "listen_address": { 00:22:03.154 "trtype": "TCP", 00:22:03.154 "adrfam": "IPv4", 00:22:03.154 "traddr": "10.0.0.2", 00:22:03.154 "trsvcid": "4420" 00:22:03.154 }, 00:22:03.154 "secure_channel": false, 00:22:03.154 "sock_impl": "ssl" 00:22:03.154 } 00:22:03.154 } 00:22:03.154 ] 00:22:03.154 } 00:22:03.154 ] 00:22:03.154 }' 00:22:03.154 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:03.412 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:03.412 "subsystems": [ 00:22:03.412 { 00:22:03.412 "subsystem": "keyring", 00:22:03.412 "config": [ 00:22:03.412 { 00:22:03.412 "method": "keyring_file_add_key", 00:22:03.412 "params": { 00:22:03.412 "name": "key0", 00:22:03.412 "path": "/tmp/tmp.qFRJT7xJf9" 00:22:03.412 } 00:22:03.412 } 00:22:03.412 ] 00:22:03.412 }, 00:22:03.412 { 00:22:03.412 "subsystem": "iobuf", 00:22:03.412 "config": [ 00:22:03.412 { 00:22:03.412 "method": "iobuf_set_options", 00:22:03.412 "params": { 00:22:03.412 
"small_pool_count": 8192, 00:22:03.412 "large_pool_count": 1024, 00:22:03.412 "small_bufsize": 8192, 00:22:03.412 "large_bufsize": 135168, 00:22:03.412 "enable_numa": false 00:22:03.412 } 00:22:03.412 } 00:22:03.412 ] 00:22:03.412 }, 00:22:03.412 { 00:22:03.412 "subsystem": "sock", 00:22:03.412 "config": [ 00:22:03.412 { 00:22:03.412 "method": "sock_set_default_impl", 00:22:03.412 "params": { 00:22:03.412 "impl_name": "posix" 00:22:03.412 } 00:22:03.412 }, 00:22:03.412 { 00:22:03.412 "method": "sock_impl_set_options", 00:22:03.412 "params": { 00:22:03.412 "impl_name": "ssl", 00:22:03.412 "recv_buf_size": 4096, 00:22:03.412 "send_buf_size": 4096, 00:22:03.413 "enable_recv_pipe": true, 00:22:03.413 "enable_quickack": false, 00:22:03.413 "enable_placement_id": 0, 00:22:03.413 "enable_zerocopy_send_server": true, 00:22:03.413 "enable_zerocopy_send_client": false, 00:22:03.413 "zerocopy_threshold": 0, 00:22:03.413 "tls_version": 0, 00:22:03.413 "enable_ktls": false 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "sock_impl_set_options", 00:22:03.413 "params": { 00:22:03.413 "impl_name": "posix", 00:22:03.413 "recv_buf_size": 2097152, 00:22:03.413 "send_buf_size": 2097152, 00:22:03.413 "enable_recv_pipe": true, 00:22:03.413 "enable_quickack": false, 00:22:03.413 "enable_placement_id": 0, 00:22:03.413 "enable_zerocopy_send_server": true, 00:22:03.413 "enable_zerocopy_send_client": false, 00:22:03.413 "zerocopy_threshold": 0, 00:22:03.413 "tls_version": 0, 00:22:03.413 "enable_ktls": false 00:22:03.413 } 00:22:03.413 } 00:22:03.413 ] 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "subsystem": "vmd", 00:22:03.413 "config": [] 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "subsystem": "accel", 00:22:03.413 "config": [ 00:22:03.413 { 00:22:03.413 "method": "accel_set_options", 00:22:03.413 "params": { 00:22:03.413 "small_cache_size": 128, 00:22:03.413 "large_cache_size": 16, 00:22:03.413 "task_count": 2048, 00:22:03.413 "sequence_count": 2048, 00:22:03.413 
"buf_count": 2048 00:22:03.413 } 00:22:03.413 } 00:22:03.413 ] 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "subsystem": "bdev", 00:22:03.413 "config": [ 00:22:03.413 { 00:22:03.413 "method": "bdev_set_options", 00:22:03.413 "params": { 00:22:03.413 "bdev_io_pool_size": 65535, 00:22:03.413 "bdev_io_cache_size": 256, 00:22:03.413 "bdev_auto_examine": true, 00:22:03.413 "iobuf_small_cache_size": 128, 00:22:03.413 "iobuf_large_cache_size": 16 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "bdev_raid_set_options", 00:22:03.413 "params": { 00:22:03.413 "process_window_size_kb": 1024, 00:22:03.413 "process_max_bandwidth_mb_sec": 0 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "bdev_iscsi_set_options", 00:22:03.413 "params": { 00:22:03.413 "timeout_sec": 30 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "bdev_nvme_set_options", 00:22:03.413 "params": { 00:22:03.413 "action_on_timeout": "none", 00:22:03.413 "timeout_us": 0, 00:22:03.413 "timeout_admin_us": 0, 00:22:03.413 "keep_alive_timeout_ms": 10000, 00:22:03.413 "arbitration_burst": 0, 00:22:03.413 "low_priority_weight": 0, 00:22:03.413 "medium_priority_weight": 0, 00:22:03.413 "high_priority_weight": 0, 00:22:03.413 "nvme_adminq_poll_period_us": 10000, 00:22:03.413 "nvme_ioq_poll_period_us": 0, 00:22:03.413 "io_queue_requests": 512, 00:22:03.413 "delay_cmd_submit": true, 00:22:03.413 "transport_retry_count": 4, 00:22:03.413 "bdev_retry_count": 3, 00:22:03.413 "transport_ack_timeout": 0, 00:22:03.413 "ctrlr_loss_timeout_sec": 0, 00:22:03.413 "reconnect_delay_sec": 0, 00:22:03.413 "fast_io_fail_timeout_sec": 0, 00:22:03.413 "disable_auto_failback": false, 00:22:03.413 "generate_uuids": false, 00:22:03.413 "transport_tos": 0, 00:22:03.413 "nvme_error_stat": false, 00:22:03.413 "rdma_srq_size": 0, 00:22:03.413 "io_path_stat": false, 00:22:03.413 "allow_accel_sequence": false, 00:22:03.413 "rdma_max_cq_size": 0, 00:22:03.413 "rdma_cm_event_timeout_ms": 0, 
00:22:03.413 "dhchap_digests": [ 00:22:03.413 "sha256", 00:22:03.413 "sha384", 00:22:03.413 "sha512" 00:22:03.413 ], 00:22:03.413 "dhchap_dhgroups": [ 00:22:03.413 "null", 00:22:03.413 "ffdhe2048", 00:22:03.413 "ffdhe3072", 00:22:03.413 "ffdhe4096", 00:22:03.413 "ffdhe6144", 00:22:03.413 "ffdhe8192" 00:22:03.413 ] 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "bdev_nvme_attach_controller", 00:22:03.413 "params": { 00:22:03.413 "name": "nvme0", 00:22:03.413 "trtype": "TCP", 00:22:03.413 "adrfam": "IPv4", 00:22:03.413 "traddr": "10.0.0.2", 00:22:03.413 "trsvcid": "4420", 00:22:03.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.413 "prchk_reftag": false, 00:22:03.413 "prchk_guard": false, 00:22:03.413 "ctrlr_loss_timeout_sec": 0, 00:22:03.413 "reconnect_delay_sec": 0, 00:22:03.413 "fast_io_fail_timeout_sec": 0, 00:22:03.413 "psk": "key0", 00:22:03.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.413 "hdgst": false, 00:22:03.413 "ddgst": false, 00:22:03.413 "multipath": "multipath" 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "bdev_nvme_set_hotplug", 00:22:03.413 "params": { 00:22:03.413 "period_us": 100000, 00:22:03.413 "enable": false 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "bdev_enable_histogram", 00:22:03.413 "params": { 00:22:03.413 "name": "nvme0n1", 00:22:03.413 "enable": true 00:22:03.413 } 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "method": "bdev_wait_for_examine" 00:22:03.413 } 00:22:03.413 ] 00:22:03.413 }, 00:22:03.413 { 00:22:03.413 "subsystem": "nbd", 00:22:03.413 "config": [] 00:22:03.413 } 00:22:03.413 ] 00:22:03.413 }' 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3199127 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3199127 ']' 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3199127 00:22:03.413 13:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199127 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199127' 00:22:03.413 killing process with pid 3199127 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3199127 00:22:03.413 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.413 00:22:03.413 Latency(us) 00:22:03.413 [2024-11-25T12:21:01.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.413 [2024-11-25T12:21:01.072Z] =================================================================================================================== 00:22:03.413 [2024-11-25T12:21:01.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.413 13:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3199127 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3198992 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3198992 ']' 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3198992 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.671 
13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198992 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198992' 00:22:03.671 killing process with pid 3198992 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3198992 00:22:03.671 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3198992 00:22:03.930 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:03.930 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:03.930 "subsystems": [ 00:22:03.930 { 00:22:03.930 "subsystem": "keyring", 00:22:03.930 "config": [ 00:22:03.930 { 00:22:03.930 "method": "keyring_file_add_key", 00:22:03.930 "params": { 00:22:03.930 "name": "key0", 00:22:03.930 "path": "/tmp/tmp.qFRJT7xJf9" 00:22:03.930 } 00:22:03.930 } 00:22:03.930 ] 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "subsystem": "iobuf", 00:22:03.930 "config": [ 00:22:03.930 { 00:22:03.930 "method": "iobuf_set_options", 00:22:03.930 "params": { 00:22:03.930 "small_pool_count": 8192, 00:22:03.930 "large_pool_count": 1024, 00:22:03.930 "small_bufsize": 8192, 00:22:03.930 "large_bufsize": 135168, 00:22:03.930 "enable_numa": false 00:22:03.930 } 00:22:03.930 } 00:22:03.930 ] 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "subsystem": "sock", 00:22:03.930 "config": [ 00:22:03.930 { 00:22:03.930 "method": "sock_set_default_impl", 00:22:03.930 "params": { 00:22:03.930 "impl_name": "posix" 00:22:03.930 } 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "method": "sock_impl_set_options", 00:22:03.930 
"params": { 00:22:03.930 "impl_name": "ssl", 00:22:03.930 "recv_buf_size": 4096, 00:22:03.930 "send_buf_size": 4096, 00:22:03.930 "enable_recv_pipe": true, 00:22:03.930 "enable_quickack": false, 00:22:03.930 "enable_placement_id": 0, 00:22:03.930 "enable_zerocopy_send_server": true, 00:22:03.930 "enable_zerocopy_send_client": false, 00:22:03.930 "zerocopy_threshold": 0, 00:22:03.930 "tls_version": 0, 00:22:03.930 "enable_ktls": false 00:22:03.930 } 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "method": "sock_impl_set_options", 00:22:03.930 "params": { 00:22:03.930 "impl_name": "posix", 00:22:03.930 "recv_buf_size": 2097152, 00:22:03.930 "send_buf_size": 2097152, 00:22:03.930 "enable_recv_pipe": true, 00:22:03.930 "enable_quickack": false, 00:22:03.930 "enable_placement_id": 0, 00:22:03.930 "enable_zerocopy_send_server": true, 00:22:03.930 "enable_zerocopy_send_client": false, 00:22:03.930 "zerocopy_threshold": 0, 00:22:03.930 "tls_version": 0, 00:22:03.930 "enable_ktls": false 00:22:03.930 } 00:22:03.930 } 00:22:03.930 ] 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "subsystem": "vmd", 00:22:03.930 "config": [] 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "subsystem": "accel", 00:22:03.930 "config": [ 00:22:03.930 { 00:22:03.930 "method": "accel_set_options", 00:22:03.930 "params": { 00:22:03.930 "small_cache_size": 128, 00:22:03.930 "large_cache_size": 16, 00:22:03.930 "task_count": 2048, 00:22:03.930 "sequence_count": 2048, 00:22:03.930 "buf_count": 2048 00:22:03.930 } 00:22:03.930 } 00:22:03.930 ] 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "subsystem": "bdev", 00:22:03.930 "config": [ 00:22:03.930 { 00:22:03.930 "method": "bdev_set_options", 00:22:03.930 "params": { 00:22:03.930 "bdev_io_pool_size": 65535, 00:22:03.930 "bdev_io_cache_size": 256, 00:22:03.930 "bdev_auto_examine": true, 00:22:03.930 "iobuf_small_cache_size": 128, 00:22:03.930 "iobuf_large_cache_size": 16 00:22:03.930 } 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "method": "bdev_raid_set_options", 
00:22:03.930 "params": { 00:22:03.930 "process_window_size_kb": 1024, 00:22:03.930 "process_max_bandwidth_mb_sec": 0 00:22:03.930 } 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "method": "bdev_iscsi_set_options", 00:22:03.930 "params": { 00:22:03.930 "timeout_sec": 30 00:22:03.930 } 00:22:03.930 }, 00:22:03.930 { 00:22:03.930 "method": "bdev_nvme_set_options", 00:22:03.930 "params": { 00:22:03.930 "action_on_timeout": "none", 00:22:03.930 "timeout_us": 0, 00:22:03.930 "timeout_admin_us": 0, 00:22:03.930 "keep_alive_timeout_ms": 10000, 00:22:03.930 "arbitration_burst": 0, 00:22:03.930 "low_priority_weight": 0, 00:22:03.930 "medium_priority_weight": 0, 00:22:03.930 "high_priority_weight": 0, 00:22:03.930 "nvme_adminq_poll_period_us": 10000, 00:22:03.930 "nvme_ioq_poll_period_us": 0, 00:22:03.930 "io_queue_requests": 0, 00:22:03.930 "delay_cmd_submit": true, 00:22:03.930 "transport_retry_count": 4, 00:22:03.930 "bdev_retry_count": 3, 00:22:03.930 "transport_ack_timeout": 0, 00:22:03.930 "ctrlr_loss_timeout_sec": 0, 00:22:03.930 "reconnect_delay_sec": 0, 00:22:03.930 "fast_io_fail_timeout_sec": 0, 00:22:03.930 "disable_auto_failback": false, 00:22:03.930 "generate_uuids": false, 00:22:03.930 "transport_tos": 0, 00:22:03.930 "nvme_error_stat": false, 00:22:03.930 "rdma_srq_size": 0, 00:22:03.930 "io_path_stat": false, 00:22:03.931 "allow_accel_sequence": false, 00:22:03.931 "rdma_max_cq_size": 0, 00:22:03.931 "rdma_cm_event_timeout_ms": 0, 00:22:03.931 "dhchap_digests": [ 00:22:03.931 "sha256", 00:22:03.931 "sha384", 00:22:03.931 "sha512" 00:22:03.931 ], 00:22:03.931 "dhchap_dhgroups": [ 00:22:03.931 "null", 00:22:03.931 "ffdhe2048", 00:22:03.931 "ffdhe3072", 00:22:03.931 "ffdhe4096", 00:22:03.931 "ffdhe6144", 00:22:03.931 "ffdhe8192" 00:22:03.931 ] 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "bdev_nvme_set_hotplug", 00:22:03.931 "params": { 00:22:03.931 "period_us": 100000, 00:22:03.931 "enable": false 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 
{ 00:22:03.931 "method": "bdev_malloc_create", 00:22:03.931 "params": { 00:22:03.931 "name": "malloc0", 00:22:03.931 "num_blocks": 8192, 00:22:03.931 "block_size": 4096, 00:22:03.931 "physical_block_size": 4096, 00:22:03.931 "uuid": "059283a4-b19c-4317-917d-71dfd9e6d6e8", 00:22:03.931 "optimal_io_boundary": 0, 00:22:03.931 "md_size": 0, 00:22:03.931 "dif_type": 0, 00:22:03.931 "dif_is_head_of_md": false, 00:22:03.931 "dif_pi_format": 0 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "bdev_wait_for_examine" 00:22:03.931 } 00:22:03.931 ] 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "subsystem": "nbd", 00:22:03.931 "config": [] 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "subsystem": "scheduler", 00:22:03.931 "config": [ 00:22:03.931 { 00:22:03.931 "method": "framework_set_scheduler", 00:22:03.931 "params": { 00:22:03.931 "name": "static" 00:22:03.931 } 00:22:03.931 } 00:22:03.931 ] 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "subsystem": "nvmf", 00:22:03.931 "config": [ 00:22:03.931 { 00:22:03.931 "method": "nvmf_set_config", 00:22:03.931 "params": { 00:22:03.931 "discovery_filter": "match_any", 00:22:03.931 "admin_cmd_passthru": { 00:22:03.931 "identify_ctrlr": false 00:22:03.931 }, 00:22:03.931 "dhchap_digests": [ 00:22:03.931 "sha256", 00:22:03.931 "sha384", 00:22:03.931 "sha512" 00:22:03.931 ], 00:22:03.931 "dhchap_dhgroups": [ 00:22:03.931 "null", 00:22:03.931 "ffdhe2048", 00:22:03.931 "ffdhe3072", 00:22:03.931 "ffdhe4096", 00:22:03.931 "ffdhe6144", 00:22:03.931 "ffdhe8192" 00:22:03.931 ] 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "nvmf_set_max_subsystems", 00:22:03.931 "params": { 00:22:03.931 "max_subsystems": 1024 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "nvmf_set_crdt", 00:22:03.931 "params": { 00:22:03.931 "crdt1": 0, 00:22:03.931 "crdt2": 0, 00:22:03.931 "crdt3": 0 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "nvmf_create_transport", 00:22:03.931 "params": { 
00:22:03.931 "trtype": "TCP", 00:22:03.931 "max_queue_depth": 128, 00:22:03.931 "max_io_qpairs_per_ctrlr": 127, 00:22:03.931 "in_capsule_data_size": 4096, 00:22:03.931 "max_io_size": 131072, 00:22:03.931 "io_unit_size": 131072, 00:22:03.931 "max_aq_depth": 128, 00:22:03.931 "num_shared_buffers": 511, 00:22:03.931 "buf_cache_size": 4294967295, 00:22:03.931 "dif_insert_or_strip": false, 00:22:03.931 "zcopy": false, 00:22:03.931 "c2h_success": false, 00:22:03.931 "sock_priority": 0, 00:22:03.931 "abort_timeout_sec": 1, 00:22:03.931 "ack_timeout": 0, 00:22:03.931 "data_wr_pool_size": 0 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "nvmf_create_subsystem", 00:22:03.931 "params": { 00:22:03.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.931 "allow_any_host": false, 00:22:03.931 "serial_number": "00000000000000000000", 00:22:03.931 "model_number": "SPDK bdev Controller", 00:22:03.931 "max_namespaces": 32, 00:22:03.931 "min_cntlid": 1, 00:22:03.931 "max_cntlid": 65519, 00:22:03.931 "ana_reporting": false 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "nvmf_subsystem_add_host", 00:22:03.931 "params": { 00:22:03.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.931 "host": "nqn.2016-06.io.spdk:host1", 00:22:03.931 "psk": "key0" 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "nvmf_subsystem_add_ns", 00:22:03.931 "params": { 00:22:03.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.931 "namespace": { 00:22:03.931 "nsid": 1, 00:22:03.931 "bdev_name": "malloc0", 00:22:03.931 "nguid": "059283A4B19C4317917D71DFD9E6D6E8", 00:22:03.931 "uuid": "059283a4-b19c-4317-917d-71dfd9e6d6e8", 00:22:03.931 "no_auto_visible": false 00:22:03.931 } 00:22:03.931 } 00:22:03.931 }, 00:22:03.931 { 00:22:03.931 "method": "nvmf_subsystem_add_listener", 00:22:03.931 "params": { 00:22:03.931 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.931 "listen_address": { 00:22:03.931 "trtype": "TCP", 00:22:03.931 "adrfam": "IPv4", 00:22:03.931 
"traddr": "10.0.0.2", 00:22:03.931 "trsvcid": "4420" 00:22:03.931 }, 00:22:03.931 "secure_channel": false, 00:22:03.931 "sock_impl": "ssl" 00:22:03.931 } 00:22:03.931 } 00:22:03.931 ] 00:22:03.931 } 00:22:03.931 ] 00:22:03.931 }' 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3199503 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3199503 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3199503 ']' 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.931 13:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.931 [2024-11-25 13:21:01.479486] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:22:03.932 [2024-11-25 13:21:01.479571] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.932 [2024-11-25 13:21:01.553562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.189 [2024-11-25 13:21:01.612733] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.189 [2024-11-25 13:21:01.612779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.189 [2024-11-25 13:21:01.612792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.189 [2024-11-25 13:21:01.612803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.189 [2024-11-25 13:21:01.612813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.189 [2024-11-25 13:21:01.613443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.446 [2024-11-25 13:21:01.863808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.446 [2024-11-25 13:21:01.895845] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.446 [2024-11-25 13:21:01.896103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3199692 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3199692 /var/tmp/bdevperf.sock 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3199692 ']' 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:05.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:05.012 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:05.012 "subsystems": [ 00:22:05.012 { 00:22:05.012 "subsystem": "keyring", 00:22:05.012 "config": [ 00:22:05.012 { 00:22:05.012 "method": "keyring_file_add_key", 00:22:05.012 "params": { 00:22:05.012 "name": "key0", 00:22:05.012 "path": "/tmp/tmp.qFRJT7xJf9" 00:22:05.012 } 00:22:05.012 } 00:22:05.012 ] 00:22:05.012 }, 00:22:05.012 { 00:22:05.012 "subsystem": "iobuf", 00:22:05.012 "config": [ 00:22:05.012 { 00:22:05.012 "method": "iobuf_set_options", 00:22:05.012 "params": { 00:22:05.012 "small_pool_count": 8192, 00:22:05.012 "large_pool_count": 1024, 00:22:05.012 "small_bufsize": 8192, 00:22:05.012 "large_bufsize": 135168, 00:22:05.012 "enable_numa": false 00:22:05.012 } 00:22:05.012 } 00:22:05.012 ] 00:22:05.012 }, 00:22:05.012 { 00:22:05.012 "subsystem": "sock", 00:22:05.012 "config": [ 00:22:05.012 { 00:22:05.012 "method": "sock_set_default_impl", 00:22:05.012 "params": { 00:22:05.012 "impl_name": "posix" 00:22:05.012 } 00:22:05.012 }, 00:22:05.012 { 00:22:05.012 "method": "sock_impl_set_options", 00:22:05.012 "params": { 00:22:05.012 "impl_name": "ssl", 00:22:05.012 "recv_buf_size": 4096, 00:22:05.012 "send_buf_size": 4096, 00:22:05.012 "enable_recv_pipe": true, 00:22:05.012 "enable_quickack": false, 00:22:05.012 "enable_placement_id": 0, 00:22:05.012 "enable_zerocopy_send_server": true, 00:22:05.012 "enable_zerocopy_send_client": false, 00:22:05.012 "zerocopy_threshold": 0, 00:22:05.012 "tls_version": 0, 00:22:05.012 "enable_ktls": false 00:22:05.012 } 00:22:05.012 }, 00:22:05.012 { 00:22:05.012 "method": "sock_impl_set_options", 00:22:05.012 "params": { 
00:22:05.012 "impl_name": "posix", 00:22:05.012 "recv_buf_size": 2097152, 00:22:05.012 "send_buf_size": 2097152, 00:22:05.012 "enable_recv_pipe": true, 00:22:05.012 "enable_quickack": false, 00:22:05.012 "enable_placement_id": 0, 00:22:05.012 "enable_zerocopy_send_server": true, 00:22:05.012 "enable_zerocopy_send_client": false, 00:22:05.012 "zerocopy_threshold": 0, 00:22:05.012 "tls_version": 0, 00:22:05.013 "enable_ktls": false 00:22:05.013 } 00:22:05.013 } 00:22:05.013 ] 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "subsystem": "vmd", 00:22:05.013 "config": [] 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "subsystem": "accel", 00:22:05.013 "config": [ 00:22:05.013 { 00:22:05.013 "method": "accel_set_options", 00:22:05.013 "params": { 00:22:05.013 "small_cache_size": 128, 00:22:05.013 "large_cache_size": 16, 00:22:05.013 "task_count": 2048, 00:22:05.013 "sequence_count": 2048, 00:22:05.013 "buf_count": 2048 00:22:05.013 } 00:22:05.013 } 00:22:05.013 ] 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "subsystem": "bdev", 00:22:05.013 "config": [ 00:22:05.013 { 00:22:05.013 "method": "bdev_set_options", 00:22:05.013 "params": { 00:22:05.013 "bdev_io_pool_size": 65535, 00:22:05.013 "bdev_io_cache_size": 256, 00:22:05.013 "bdev_auto_examine": true, 00:22:05.013 "iobuf_small_cache_size": 128, 00:22:05.013 "iobuf_large_cache_size": 16 00:22:05.013 } 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "method": "bdev_raid_set_options", 00:22:05.013 "params": { 00:22:05.013 "process_window_size_kb": 1024, 00:22:05.013 "process_max_bandwidth_mb_sec": 0 00:22:05.013 } 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "method": "bdev_iscsi_set_options", 00:22:05.013 "params": { 00:22:05.013 "timeout_sec": 30 00:22:05.013 } 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "method": "bdev_nvme_set_options", 00:22:05.013 "params": { 00:22:05.013 "action_on_timeout": "none", 00:22:05.013 "timeout_us": 0, 00:22:05.013 "timeout_admin_us": 0, 00:22:05.013 "keep_alive_timeout_ms": 10000, 00:22:05.013 
"arbitration_burst": 0, 00:22:05.013 "low_priority_weight": 0, 00:22:05.013 "medium_priority_weight": 0, 00:22:05.013 "high_priority_weight": 0, 00:22:05.013 "nvme_adminq_poll_period_us": 10000, 00:22:05.013 "nvme_ioq_poll_period_us": 0, 00:22:05.013 "io_queue_requests": 512, 00:22:05.013 "delay_cmd_submit": true, 00:22:05.013 "transport_retry_count": 4, 00:22:05.013 "bdev_retry_count": 3, 00:22:05.013 "transport_ack_timeout": 0, 00:22:05.013 "ctrlr_loss_timeout_sec": 0, 00:22:05.013 "reconnect_delay_sec": 0, 00:22:05.013 "fast_io_fail_timeout_sec": 0, 00:22:05.013 "disable_auto_failback": false, 00:22:05.013 "generate_uuids": false, 00:22:05.013 "transport_tos": 0, 00:22:05.013 "nvme_error_stat": false, 00:22:05.013 "rdma_srq_size": 0, 00:22:05.013 "io_path_stat": false, 00:22:05.013 "allow_accel_sequence": false, 00:22:05.013 "rdma_max_cq_size": 0, 00:22:05.013 "rdma_cm_event_timeout_ms": 0 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.013 , 00:22:05.013 "dhchap_digests": [ 00:22:05.013 "sha256", 00:22:05.013 "sha384", 00:22:05.013 "sha512" 00:22:05.013 ], 00:22:05.013 "dhchap_dhgroups": [ 00:22:05.013 "null", 00:22:05.013 "ffdhe2048", 00:22:05.013 "ffdhe3072", 00:22:05.013 "ffdhe4096", 00:22:05.013 "ffdhe6144", 00:22:05.013 "ffdhe8192" 00:22:05.013 ] 00:22:05.013 } 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "method": "bdev_nvme_attach_controller", 00:22:05.013 "params": { 00:22:05.013 "name": "nvme0", 00:22:05.013 "trtype": "TCP", 00:22:05.013 "adrfam": "IPv4", 00:22:05.013 "traddr": "10.0.0.2", 00:22:05.013 "trsvcid": "4420", 00:22:05.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.013 "prchk_reftag": false, 00:22:05.013 "prchk_guard": false, 00:22:05.013 "ctrlr_loss_timeout_sec": 0, 00:22:05.013 "reconnect_delay_sec": 0, 00:22:05.013 "fast_io_fail_timeout_sec": 0, 00:22:05.013 "psk": "key0", 00:22:05.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.013 "hdgst": false, 00:22:05.013 "ddgst": 
false, 00:22:05.013 "multipath": "multipath" 00:22:05.013 } 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "method": "bdev_nvme_set_hotplug", 00:22:05.013 "params": { 00:22:05.013 "period_us": 100000, 00:22:05.013 "enable": false 00:22:05.013 } 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "method": "bdev_enable_histogram", 00:22:05.013 "params": { 00:22:05.013 "name": "nvme0n1", 00:22:05.013 "enable": true 00:22:05.013 } 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "method": "bdev_wait_for_examine" 00:22:05.013 } 00:22:05.013 ] 00:22:05.013 }, 00:22:05.013 { 00:22:05.013 "subsystem": "nbd", 00:22:05.013 "config": [] 00:22:05.013 } 00:22:05.013 ] 00:22:05.013 }' 00:22:05.013 13:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.013 [2024-11-25 13:21:02.592142] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:22:05.013 [2024-11-25 13:21:02.592239] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199692 ] 00:22:05.013 [2024-11-25 13:21:02.664236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.271 [2024-11-25 13:21:02.724433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.271 [2024-11-25 13:21:02.897872] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:05.530 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.530 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:05.530 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.530 13:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:05.787 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.787 13:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.787 Running I/O for 1 seconds... 00:22:07.161 3364.00 IOPS, 13.14 MiB/s 00:22:07.161 Latency(us) 00:22:07.161 [2024-11-25T12:21:04.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.161 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:07.161 Verification LBA range: start 0x0 length 0x2000 00:22:07.161 nvme0n1 : 1.02 3430.81 13.40 0.00 0.00 37010.69 6043.88 31457.28 00:22:07.161 [2024-11-25T12:21:04.820Z] =================================================================================================================== 00:22:07.161 [2024-11-25T12:21:04.820Z] Total : 3430.81 13.40 0.00 0.00 37010.69 6043.88 31457.28 00:22:07.161 { 00:22:07.161 "results": [ 00:22:07.161 { 00:22:07.161 "job": "nvme0n1", 00:22:07.161 "core_mask": "0x2", 00:22:07.161 "workload": "verify", 00:22:07.161 "status": "finished", 00:22:07.161 "verify_range": { 00:22:07.161 "start": 0, 00:22:07.161 "length": 8192 00:22:07.161 }, 00:22:07.161 "queue_depth": 128, 00:22:07.161 "io_size": 4096, 00:22:07.161 "runtime": 1.017834, 00:22:07.161 "iops": 3430.8148480007544, 00:22:07.161 "mibps": 13.401620500002947, 00:22:07.161 "io_failed": 0, 00:22:07.161 "io_timeout": 0, 00:22:07.161 "avg_latency_us": 37010.688619065804, 00:22:07.161 "min_latency_us": 6043.875555555555, 00:22:07.161 "max_latency_us": 31457.28 00:22:07.161 } 00:22:07.161 ], 00:22:07.161 "core_count": 1 00:22:07.161 } 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:07.161 13:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:07.161 nvmf_trace.0 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3199692 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3199692 ']' 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3199692 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3199692 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199692' 00:22:07.161 killing process with pid 3199692 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3199692 00:22:07.161 Received shutdown signal, test time was about 1.000000 seconds 00:22:07.161 00:22:07.161 Latency(us) 00:22:07.161 [2024-11-25T12:21:04.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.161 [2024-11-25T12:21:04.820Z] =================================================================================================================== 00:22:07.161 [2024-11-25T12:21:04.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3199692 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:07.161 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:07.161 rmmod nvme_tcp 00:22:07.420 rmmod nvme_fabrics 00:22:07.420 rmmod nvme_keyring 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3199503 ']' 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3199503 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3199503 ']' 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3199503 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199503 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199503' 00:22:07.420 killing process with pid 3199503 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3199503 00:22:07.420 13:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3199503 00:22:07.695 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.696 13:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.602 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:09.602 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.VuE6m34ny2 /tmp/tmp.23ydTxKtTj /tmp/tmp.qFRJT7xJf9 00:22:09.602 00:22:09.602 real 1m23.269s 00:22:09.602 user 2m20.583s 00:22:09.602 sys 0m24.632s 00:22:09.602 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:09.602 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.602 ************************************ 00:22:09.602 END TEST nvmf_tls 00:22:09.602 ************************************ 00:22:09.602 13:21:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:09.603 13:21:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:09.603 13:21:07 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.603 13:21:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:09.603 ************************************ 00:22:09.603 START TEST nvmf_fips 00:22:09.603 ************************************ 00:22:09.603 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:09.862 * Looking for test storage... 00:22:09.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.862 
13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:09.862 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:09.863 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.863 --rc genhtml_branch_coverage=1 00:22:09.863 --rc genhtml_function_coverage=1 00:22:09.863 --rc genhtml_legend=1 00:22:09.863 --rc geninfo_all_blocks=1 00:22:09.863 --rc geninfo_unexecuted_blocks=1 00:22:09.863 00:22:09.863 ' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.863 --rc genhtml_branch_coverage=1 00:22:09.863 --rc genhtml_function_coverage=1 00:22:09.863 --rc genhtml_legend=1 00:22:09.863 --rc geninfo_all_blocks=1 00:22:09.863 --rc geninfo_unexecuted_blocks=1 00:22:09.863 00:22:09.863 ' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.863 --rc genhtml_branch_coverage=1 00:22:09.863 --rc genhtml_function_coverage=1 00:22:09.863 --rc genhtml_legend=1 00:22:09.863 --rc geninfo_all_blocks=1 00:22:09.863 --rc geninfo_unexecuted_blocks=1 00:22:09.863 00:22:09.863 ' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.863 --rc genhtml_branch_coverage=1 00:22:09.863 --rc genhtml_function_coverage=1 00:22:09.863 --rc genhtml_legend=1 00:22:09.863 --rc geninfo_all_blocks=1 00:22:09.863 --rc geninfo_unexecuted_blocks=1 00:22:09.863 00:22:09.863 ' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.863 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.863 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
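The paths/export.sh trace above shows the same tool directories (`/opt/go/...`, `/opt/golangci/...`, `/opt/protoc/...`) being prepended to `PATH` repeatedly, so the exported value accumulates duplicates. A generic sketch of deduplicating such a colon-separated list (this helper is not part of the SPDK scripts, just an illustration of the cleanup):

```shell
# Collapse duplicate entries in a PATH-like string, keeping the first
# occurrence of each directory. Hypothetical helper, not an SPDK script.
dedupe_path() {
    local out= entry
    local IFS=':'                     # split the argument on colons
    for entry in $1; do
        case ":$out:" in
            *":$entry:"*) ;;          # already present, skip
            *) out="${out:+$out:}$entry" ;;
        esac
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# → /opt/go/bin:/usr/bin:/sbin
```

Applied to the `paths/export.sh@6` output above, this would reduce each repeated `/opt/...` group to a single entry.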
00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:09.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:09.863 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:09.864 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:10.123 Error setting digest 00:22:10.123 40426D348A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:10.123 40426D348A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.123 13:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.123 13:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
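The `ge 3.1.1 3.0.0` check traced earlier (fips.sh@86 calling `cmp_versions` in scripts/common.sh) splits both version strings on `.`, `-`, and `:` and compares the fields numerically. A minimal re-sketch of that logic, assuming bash (this is a simplified stand-in, not the actual `cmp_versions` implementation, which also handles `<`, `>`, and `==` operators):

```shell
# Sketch of the version comparison traced above: returns 0 (true) when
# ver1 >= ver2, comparing numeric fields split on '.', '-' or ':'.
ge() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i
    for ((i = 0; i < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); i++)); do
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 0   # strictly newer
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 1   # strictly older
    done
    return 0                                              # equal
}

ge 3.1.1 3.0.0 && echo "OpenSSL 3.1.1 satisfies the 3.0.0 FIPS floor"
```

This is why the trace above walks field by field (`decimal 3`, `ver1[v]=3`, ...) before reaching `return 0`.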
00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:12.025 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:12.025 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:12.025 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
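The `NOT openssl md5 /dev/fd/62` step traced earlier (common/autotest_common.sh@652-655) is a negative test: under FIPS mode, MD5 must fail, and the harness inverts the exit status so the test passes exactly when the command errors out. A minimal sketch of that pattern, assuming the real helper does considerably more (argument validation via `type -t`, special-casing exit codes above 128):

```shell
# Sketch of the NOT negative-test pattern: invert a command's exit
# status, so the check succeeds only when the command fails (as
# 'openssl md5' must when the FIPS provider is active).
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as required
}

NOT false && echo "expected failure captured"
```

In the log above, `openssl md5` exits with the `digital envelope routines ... unsupported` error, `es=1` is recorded, and the inverted status lets the FIPS test proceed.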
00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:12.026 Found net devices under 0000:09:00.0: cvl_0_0 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:12.026 Found net devices under 0000:09:00.1: cvl_0_1 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.026 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.026 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:12.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:12.285 00:22:12.285 --- 10.0.0.2 ping statistics --- 00:22:12.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.285 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:12.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:22:12.285 00:22:12.285 --- 10.0.0.1 ping statistics --- 00:22:12.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.285 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.285 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:12.286 13:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3202553 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3202553 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3202553 ']' 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.286 13:21:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.286 [2024-11-25 13:21:09.858149] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:22:12.286 [2024-11-25 13:21:09.858267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.286 [2024-11-25 13:21:09.928365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.544 [2024-11-25 13:21:09.982174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.544 [2024-11-25 13:21:09.982229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.544 [2024-11-25 13:21:09.982252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.544 [2024-11-25 13:21:09.982262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.545 [2024-11-25 13:21:09.982272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:12.545 [2024-11-25 13:21:09.982883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.oQd 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.oQd 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.oQd 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.oQd 00:22:12.545 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:12.806 [2024-11-25 13:21:10.397519] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.806 [2024-11-25 13:21:10.413545] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:12.806 [2024-11-25 13:21:10.413779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.806 malloc0 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3202576 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3202576 /var/tmp/bdevperf.sock 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3202576 ']' 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.064 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:13.064 [2024-11-25 13:21:10.549835] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
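The key-file handling traced above (fips.sh@137-140) writes the NVMe TLS PSK in interchange format to a `mktemp` file and restricts it to owner-only access before handing the path to the target. A sketch of those steps, using the non-secret test key from the log (GNU `stat -c` assumed, as on the Linux CI host):

```shell
# Sketch of the PSK key-file setup traced above: write the
# interchange-format key to a temp file readable only by the owner.
# The key below is the published SPDK test key, not a secret.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"      # -n: no trailing newline in the key file
chmod 0600 "$key_path"
stat -c '%a' "$key_path"          # → 600
```

The resulting path (e.g. `/tmp/spdk-psk.oQd` in the log) is then registered via `keyring_file_add_key` and referenced as `--psk key0` when attaching the TLS controller.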
00:22:13.064 [2024-11-25 13:21:10.549946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202576 ] 00:22:13.064 [2024-11-25 13:21:10.624940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.064 [2024-11-25 13:21:10.685913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.322 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.322 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:13.322 13:21:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.oQd 00:22:13.629 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:13.886 [2024-11-25 13:21:11.301397] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.886 TLSTESTn1 00:22:13.886 13:21:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.887 Running I/O for 10 seconds... 
00:22:16.191 3529.00 IOPS, 13.79 MiB/s [2024-11-25T12:21:14.783Z] 3590.00 IOPS, 14.02 MiB/s [2024-11-25T12:21:15.715Z] 3599.67 IOPS, 14.06 MiB/s [2024-11-25T12:21:16.646Z] 3609.50 IOPS, 14.10 MiB/s [2024-11-25T12:21:17.579Z] 3614.20 IOPS, 14.12 MiB/s [2024-11-25T12:21:18.951Z] 3612.00 IOPS, 14.11 MiB/s [2024-11-25T12:21:19.516Z] 3611.57 IOPS, 14.11 MiB/s [2024-11-25T12:21:20.888Z] 3613.38 IOPS, 14.11 MiB/s [2024-11-25T12:21:21.822Z] 3609.67 IOPS, 14.10 MiB/s [2024-11-25T12:21:21.822Z] 3613.10 IOPS, 14.11 MiB/s 00:22:24.163 Latency(us) 00:22:24.163 [2024-11-25T12:21:21.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.163 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:24.163 Verification LBA range: start 0x0 length 0x2000 00:22:24.163 TLSTESTn1 : 10.02 3619.00 14.14 0.00 0.00 35311.69 6941.96 35146.71 00:22:24.163 [2024-11-25T12:21:21.822Z] =================================================================================================================== 00:22:24.163 [2024-11-25T12:21:21.822Z] Total : 3619.00 14.14 0.00 0.00 35311.69 6941.96 35146.71 00:22:24.163 { 00:22:24.163 "results": [ 00:22:24.163 { 00:22:24.163 "job": "TLSTESTn1", 00:22:24.163 "core_mask": "0x4", 00:22:24.163 "workload": "verify", 00:22:24.163 "status": "finished", 00:22:24.163 "verify_range": { 00:22:24.163 "start": 0, 00:22:24.163 "length": 8192 00:22:24.163 }, 00:22:24.163 "queue_depth": 128, 00:22:24.163 "io_size": 4096, 00:22:24.163 "runtime": 10.018515, 00:22:24.163 "iops": 3618.9994225691134, 00:22:24.163 "mibps": 14.1367164944106, 00:22:24.163 "io_failed": 0, 00:22:24.163 "io_timeout": 0, 00:22:24.163 "avg_latency_us": 35311.68897526812, 00:22:24.163 "min_latency_us": 6941.961481481481, 00:22:24.163 "max_latency_us": 35146.71407407407 00:22:24.163 } 00:22:24.163 ], 00:22:24.163 "core_count": 1 00:22:24.163 } 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:24.163 
13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:24.163 nvmf_trace.0 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3202576 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3202576 ']' 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3202576 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3202576 00:22:24.163 13:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3202576' 00:22:24.163 killing process with pid 3202576 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3202576 00:22:24.163 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.163 00:22:24.163 Latency(us) 00:22:24.163 [2024-11-25T12:21:21.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.163 [2024-11-25T12:21:21.822Z] =================================================================================================================== 00:22:24.163 [2024-11-25T12:21:21.822Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.163 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3202576 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.421 rmmod nvme_tcp 00:22:24.421 rmmod nvme_fabrics 00:22:24.421 rmmod nvme_keyring 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3202553 ']' 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3202553 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3202553 ']' 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3202553 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.421 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3202553 00:22:24.421 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:24.421 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:24.421 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3202553' 00:22:24.421 killing process with pid 3202553 00:22:24.421 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3202553 00:22:24.421 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3202553 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.681 13:21:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.oQd 00:22:27.217 00:22:27.217 real 0m17.057s 00:22:27.217 user 0m22.659s 00:22:27.217 sys 0m5.363s 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:27.217 ************************************ 00:22:27.217 END TEST nvmf_fips 00:22:27.217 ************************************ 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.217 ************************************ 00:22:27.217 START TEST nvmf_control_msg_list 00:22:27.217 ************************************ 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:27.217 * Looking for test storage... 00:22:27.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.217 13:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.217 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:27.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.217 --rc genhtml_branch_coverage=1 00:22:27.217 --rc genhtml_function_coverage=1 00:22:27.218 --rc genhtml_legend=1 00:22:27.218 --rc geninfo_all_blocks=1 00:22:27.218 --rc geninfo_unexecuted_blocks=1 00:22:27.218 00:22:27.218 ' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:27.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.218 --rc genhtml_branch_coverage=1 00:22:27.218 --rc genhtml_function_coverage=1 00:22:27.218 --rc genhtml_legend=1 00:22:27.218 --rc geninfo_all_blocks=1 00:22:27.218 --rc geninfo_unexecuted_blocks=1 00:22:27.218 00:22:27.218 ' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:27.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.218 --rc genhtml_branch_coverage=1 00:22:27.218 --rc genhtml_function_coverage=1 00:22:27.218 --rc genhtml_legend=1 00:22:27.218 --rc geninfo_all_blocks=1 00:22:27.218 --rc geninfo_unexecuted_blocks=1 00:22:27.218 00:22:27.218 ' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:22:27.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.218 --rc genhtml_branch_coverage=1 00:22:27.218 --rc genhtml_function_coverage=1 00:22:27.218 --rc genhtml_legend=1 00:22:27.218 --rc geninfo_all_blocks=1 00:22:27.218 --rc geninfo_unexecuted_blocks=1 00:22:27.218 00:22:27.218 ' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.218 13:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.218 13:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.218 13:21:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:29.124 13:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:29.124 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:29.124 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:29.124 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:29.125 13:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:29.125 Found net devices under 0000:09:00.0: cvl_0_0 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:29.125 13:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:29.125 Found net devices under 0000:09:00.1: cvl_0_1 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.125 13:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:29.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:22:29.125 00:22:29.125 --- 10.0.0.2 ping statistics --- 00:22:29.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.125 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:22:29.125 00:22:29.125 --- 10.0.0.1 ping statistics --- 00:22:29.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.125 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3205957 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3205957 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3205957 ']' 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:29.125 13:21:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.384 [2024-11-25 13:21:26.785042] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:22:29.384 [2024-11-25 13:21:26.785133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.384 [2024-11-25 13:21:26.859897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.384 [2024-11-25 13:21:26.917342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.384 [2024-11-25 13:21:26.917399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.384 [2024-11-25 13:21:26.917412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.384 [2024-11-25 13:21:26.917423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.384 [2024-11-25 13:21:26.917433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:29.384 [2024-11-25 13:21:26.918061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.384 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.384 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:29.384 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.384 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.384 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 [2024-11-25 13:21:27.067620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 Malloc0 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.642 [2024-11-25 13:21:27.107917] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3205988 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3205989 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3205990 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3205988 00:22:29.642 13:21:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.642 [2024-11-25 13:21:27.166463] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:29.642 [2024-11-25 13:21:27.176466] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:29.642 [2024-11-25 13:21:27.176749] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:31.015 Initializing NVMe Controllers 00:22:31.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:31.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:31.015 Initialization complete. Launching workers. 00:22:31.015 ======================================================== 00:22:31.015 Latency(us) 00:22:31.015 Device Information : IOPS MiB/s Average min max 00:22:31.015 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3363.00 13.14 296.94 192.82 40729.79 00:22:31.015 ======================================================== 00:22:31.015 Total : 3363.00 13.14 296.94 192.82 40729.79 00:22:31.015 00:22:31.015 Initializing NVMe Controllers 00:22:31.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:31.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:31.015 Initialization complete. Launching workers. 
00:22:31.015 ======================================================== 00:22:31.015 Latency(us) 00:22:31.015 Device Information : IOPS MiB/s Average min max 00:22:31.015 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4717.00 18.43 211.62 146.79 611.59 00:22:31.015 ======================================================== 00:22:31.015 Total : 4717.00 18.43 211.62 146.79 611.59 00:22:31.015 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3205989 00:22:31.016 Initializing NVMe Controllers 00:22:31.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:31.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:31.016 Initialization complete. Launching workers. 00:22:31.016 ======================================================== 00:22:31.016 Latency(us) 00:22:31.016 Device Information : IOPS MiB/s Average min max 00:22:31.016 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40882.25 40487.03 41008.93 00:22:31.016 ======================================================== 00:22:31.016 Total : 25.00 0.10 40882.25 40487.03 41008.93 00:22:31.016 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3205990 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.016 13:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.016 rmmod nvme_tcp 00:22:31.016 rmmod nvme_fabrics 00:22:31.016 rmmod nvme_keyring 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3205957 ']' 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3205957 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3205957 ']' 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3205957 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3205957 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3205957' 00:22:31.016 killing process with pid 3205957 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3205957 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3205957 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.016 13:21:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.551 00:22:33.551 real 0m6.318s 00:22:33.551 user 0m5.511s 
00:22:33.551 sys 0m2.686s 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:33.551 ************************************ 00:22:33.551 END TEST nvmf_control_msg_list 00:22:33.551 ************************************ 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:33.551 ************************************ 00:22:33.551 START TEST nvmf_wait_for_buf 00:22:33.551 ************************************ 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:33.551 * Looking for test storage... 
00:22:33.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:33.551 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:22:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.552 --rc genhtml_branch_coverage=1 00:22:33.552 --rc genhtml_function_coverage=1 00:22:33.552 --rc genhtml_legend=1 00:22:33.552 --rc geninfo_all_blocks=1 00:22:33.552 --rc geninfo_unexecuted_blocks=1 00:22:33.552 00:22:33.552 ' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.552 --rc genhtml_branch_coverage=1 00:22:33.552 --rc genhtml_function_coverage=1 00:22:33.552 --rc genhtml_legend=1 00:22:33.552 --rc geninfo_all_blocks=1 00:22:33.552 --rc geninfo_unexecuted_blocks=1 00:22:33.552 00:22:33.552 ' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.552 --rc genhtml_branch_coverage=1 00:22:33.552 --rc genhtml_function_coverage=1 00:22:33.552 --rc genhtml_legend=1 00:22:33.552 --rc geninfo_all_blocks=1 00:22:33.552 --rc geninfo_unexecuted_blocks=1 00:22:33.552 00:22:33.552 ' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:33.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.552 --rc genhtml_branch_coverage=1 00:22:33.552 --rc genhtml_function_coverage=1 00:22:33.552 --rc genhtml_legend=1 00:22:33.552 --rc geninfo_all_blocks=1 00:22:33.552 --rc geninfo_unexecuted_blocks=1 00:22:33.552 00:22:33.552 ' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.552 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.453 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:35.454 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:35.712 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:35.712 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:35.712 Found net devices under 0000:09:00.0: cvl_0_0 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:35.712 13:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:35.712 Found net devices under 0000:09:00.1: cvl_0_1 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:35.712 13:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.712 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.713 13:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:35.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:22:35.713 00:22:35.713 --- 10.0.0.2 ping statistics --- 00:22:35.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.713 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:22:35.713 00:22:35.713 --- 10.0.0.1 ping statistics --- 00:22:35.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.713 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3208065 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3208065 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3208065 ']' 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.713 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:35.713 [2024-11-25 13:21:33.315705] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:22:35.713 [2024-11-25 13:21:33.315788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.971 [2024-11-25 13:21:33.390019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.971 [2024-11-25 13:21:33.447649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.971 [2024-11-25 13:21:33.447702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:35.971 [2024-11-25 13:21:33.447731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.971 [2024-11-25 13:21:33.447743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.971 [2024-11-25 13:21:33.447753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.971 [2024-11-25 13:21:33.448373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:35.971 
13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.971 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.229 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.229 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:36.229 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.229 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.229 Malloc0 00:22:36.229 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.229 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:36.229 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:36.230 [2024-11-25 13:21:33.698881] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.230 [2024-11-25 13:21:33.723040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:36.230 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:36.230 [2024-11-25 13:21:33.802456] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:37.601 Initializing NVMe Controllers 00:22:37.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:37.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:37.601 Initialization complete. Launching workers. 00:22:37.601 ======================================================== 00:22:37.601 Latency(us) 00:22:37.601 Device Information : IOPS MiB/s Average min max 00:22:37.601 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33563.04 8005.05 71834.67 00:22:37.601 ======================================================== 00:22:37.601 Total : 124.00 15.50 33563.04 8005.05 71834.67 00:22:37.601 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.601 13:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:37.601 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:37.601 rmmod nvme_tcp 00:22:37.601 rmmod nvme_fabrics 00:22:37.866 rmmod nvme_keyring 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3208065 ']' 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3208065 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3208065 ']' 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3208065 
00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208065 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208065' 00:22:37.866 killing process with pid 3208065 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3208065 00:22:37.866 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3208065 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.178 13:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.178 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:40.109 00:22:40.109 real 0m6.868s 00:22:40.109 user 0m3.290s 00:22:40.109 sys 0m2.060s 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:40.109 ************************************ 00:22:40.109 END TEST nvmf_wait_for_buf 00:22:40.109 ************************************ 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.109 13:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:42.642 
13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:42.642 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:42.642 13:21:39 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:42.642 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:42.642 Found net devices under 0000:09:00.0: cvl_0_0 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:42.642 Found net devices under 0000:09:00.1: cvl_0_1 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:42.642 ************************************ 00:22:42.642 START TEST nvmf_perf_adq 00:22:42.642 ************************************ 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:42.642 * Looking for test storage... 00:22:42.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.642 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:42.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.643 --rc genhtml_branch_coverage=1 00:22:42.643 --rc genhtml_function_coverage=1 00:22:42.643 --rc genhtml_legend=1 00:22:42.643 --rc geninfo_all_blocks=1 00:22:42.643 --rc geninfo_unexecuted_blocks=1 00:22:42.643 00:22:42.643 ' 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:42.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.643 --rc genhtml_branch_coverage=1 00:22:42.643 --rc genhtml_function_coverage=1 00:22:42.643 --rc genhtml_legend=1 00:22:42.643 --rc geninfo_all_blocks=1 00:22:42.643 --rc geninfo_unexecuted_blocks=1 00:22:42.643 00:22:42.643 ' 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:42.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.643 --rc genhtml_branch_coverage=1 00:22:42.643 --rc genhtml_function_coverage=1 00:22:42.643 --rc genhtml_legend=1 00:22:42.643 --rc geninfo_all_blocks=1 00:22:42.643 --rc geninfo_unexecuted_blocks=1 00:22:42.643 00:22:42.643 ' 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:42.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.643 --rc genhtml_branch_coverage=1 00:22:42.643 --rc genhtml_function_coverage=1 00:22:42.643 --rc genhtml_legend=1 00:22:42.643 --rc geninfo_all_blocks=1 00:22:42.643 --rc geninfo_unexecuted_blocks=1 00:22:42.643 00:22:42.643 ' 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.643 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.644 13:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:42.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:42.644 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.550 13:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:44.550 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:44.550 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.550 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:44.550 Found net devices under 0000:09:00.0: cvl_0_0 00:22:44.550 13:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:44.550 Found net devices under 0000:09:00.1: cvl_0_1 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:22:44.550 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:45.121 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:47.021 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.299 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:52.300 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:52.300 13:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:52.300 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:52.300 Found net devices under 0000:09:00.0: cvl_0_0 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:52.300 Found net devices under 0000:09:00.1: cvl_0_1 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:52.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:52.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:22:52.300 00:22:52.300 --- 10.0.0.2 ping statistics --- 00:22:52.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.300 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:22:52.300 00:22:52.300 --- 10.0.0.1 ping statistics --- 00:22:52.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.300 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3212905 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3212905 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3212905 ']' 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.300 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.301 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.301 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.301 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.301 [2024-11-25 13:21:49.864624] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:22:52.301 [2024-11-25 13:21:49.864705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.301 [2024-11-25 13:21:49.945201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.559 [2024-11-25 13:21:50.011345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.559 [2024-11-25 13:21:50.011401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.559 [2024-11-25 13:21:50.011416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.559 [2024-11-25 13:21:50.011428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.559 [2024-11-25 13:21:50.011439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:52.559 [2024-11-25 13:21:50.013118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.559 [2024-11-25 13:21:50.013148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.559 [2024-11-25 13:21:50.013203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.559 [2024-11-25 13:21:50.013206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:52.559 13:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.559 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.817 [2024-11-25 13:21:50.309186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.817 Malloc1 00:22:52.817 13:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.817 [2024-11-25 13:21:50.375649] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3212940 00:22:52.817 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:52.817 13:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:55.345 "tick_rate": 2700000000, 00:22:55.345 "poll_groups": [ 00:22:55.345 { 00:22:55.345 "name": "nvmf_tgt_poll_group_000", 00:22:55.345 "admin_qpairs": 1, 00:22:55.345 "io_qpairs": 1, 00:22:55.345 "current_admin_qpairs": 1, 00:22:55.345 "current_io_qpairs": 1, 00:22:55.345 "pending_bdev_io": 0, 00:22:55.345 "completed_nvme_io": 19770, 00:22:55.345 "transports": [ 00:22:55.345 { 00:22:55.345 "trtype": "TCP" 00:22:55.345 } 00:22:55.345 ] 00:22:55.345 }, 00:22:55.345 { 00:22:55.345 "name": "nvmf_tgt_poll_group_001", 00:22:55.345 "admin_qpairs": 0, 00:22:55.345 "io_qpairs": 1, 00:22:55.345 "current_admin_qpairs": 0, 00:22:55.345 "current_io_qpairs": 1, 00:22:55.345 "pending_bdev_io": 0, 00:22:55.345 "completed_nvme_io": 18991, 00:22:55.345 "transports": [ 00:22:55.345 { 00:22:55.345 "trtype": "TCP" 00:22:55.345 } 00:22:55.345 ] 00:22:55.345 }, 00:22:55.345 { 00:22:55.345 "name": "nvmf_tgt_poll_group_002", 00:22:55.345 "admin_qpairs": 0, 00:22:55.345 "io_qpairs": 1, 00:22:55.345 "current_admin_qpairs": 0, 00:22:55.345 "current_io_qpairs": 1, 00:22:55.345 "pending_bdev_io": 0, 00:22:55.345 "completed_nvme_io": 20214, 00:22:55.345 
"transports": [ 00:22:55.345 { 00:22:55.345 "trtype": "TCP" 00:22:55.345 } 00:22:55.345 ] 00:22:55.345 }, 00:22:55.345 { 00:22:55.345 "name": "nvmf_tgt_poll_group_003", 00:22:55.345 "admin_qpairs": 0, 00:22:55.345 "io_qpairs": 1, 00:22:55.345 "current_admin_qpairs": 0, 00:22:55.345 "current_io_qpairs": 1, 00:22:55.345 "pending_bdev_io": 0, 00:22:55.345 "completed_nvme_io": 19594, 00:22:55.345 "transports": [ 00:22:55.345 { 00:22:55.345 "trtype": "TCP" 00:22:55.345 } 00:22:55.345 ] 00:22:55.345 } 00:22:55.345 ] 00:22:55.345 }' 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:55.345 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3212940 00:23:03.455 [2024-11-25 13:22:00.523895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dcdc0 is same with the state(6) to be set 00:23:03.455 Initializing NVMe Controllers 00:23:03.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:03.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:03.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:03.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:03.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:03.455 Initialization complete. Launching workers. 
00:23:03.455 ======================================================== 00:23:03.455 Latency(us) 00:23:03.455 Device Information : IOPS MiB/s Average min max 00:23:03.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10395.34 40.61 6156.75 2160.61 10219.99 00:23:03.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10112.16 39.50 6330.55 2407.72 10507.13 00:23:03.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10634.92 41.54 6017.43 2251.78 9664.11 00:23:03.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10437.84 40.77 6131.43 2400.74 9938.73 00:23:03.455 ======================================================== 00:23:03.455 Total : 41580.26 162.42 6157.03 2160.61 10507.13 00:23:03.455 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.455 rmmod nvme_tcp 00:23:03.455 rmmod nvme_fabrics 00:23:03.455 rmmod nvme_keyring 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:03.455 13:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3212905 ']' 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3212905 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3212905 ']' 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3212905 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3212905 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3212905' 00:23:03.455 killing process with pid 3212905 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3212905 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3212905 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:03.455 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:03.456 
13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.456 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.456 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.456 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:03.456 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.456 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.456 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.360 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.360 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:05.360 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:05.360 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:06.296 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:08.200 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
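The `adq_reload_driver` step traced above resets the E810 NIC driver so ADQ starts from a clean state. Collected into one place (requires root and an `ice`-driven NIC; shown for illustration, the numbers come straight from perf_adq.sh@58-63 in this log):

```shell
# adq_reload_driver, condensed from the perf_adq.sh trace above.
# Requires root and an Intel E810 NIC bound to the ice driver.
modprobe -a sch_mqprio   # qdisc module needed by the later mqprio setup
rmmod ice || true        # tolerate "not loaded"; the log shows rmmod can lag
modprobe ice
sleep 5                  # give the NIC time to re-register its net devices
```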
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.542 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.543 13:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:13.543 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:13.543 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:13.543 Found net devices under 0000:09:00.0: cvl_0_0 00:23:13.543 13:22:10 
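The device-detection loop above classifies each PCI `vendor:device` pair into a NIC family (`e810`, `x722`, `mlx`) before deciding which ports to use. An illustrative re-implementation of that branch, using exactly the ID tables visible in the nvmf/common.sh trace (this is a stand-in for `gather_supported_nvmf_pci_devs`, not its source):

```shell
# Classify a PCI vendor:device pair the way the nvmf/common.sh trace above
# does. ID lists copied from the log (lines @325-@344); illustrative only.
nic_family() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;
    0x8086:0x37d2) echo x722 ;;
    0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|0x15b3:0x101b|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
    *) echo unknown ;;
  esac
}
```

In this run both ports report `0x8086:0x159b`, which is why the log prints `Found 0000:09:00.0 (0x8086 - 0x159b)` and takes the e810 path.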
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:13.543 Found net devices under 0000:09:00.1: cvl_0_1 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.543 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:23:13.544 00:23:13.544 --- 10.0.0.2 ping statistics --- 00:23:13.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.544 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
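The `nvmf_tcp_init` trace above moves the target-side port into its own network namespace so the target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over the physical back-to-back link rather than the local stack. Condensed into one script (interface names and addresses are the ones from this log; requires root):

```shell
# nvmf_tcp_init, condensed from the trace above. cvl_0_0 is the target
# port, cvl_0_1 the initiator port; values taken from this log.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF               # allow NVMe/TCP back in
ping -c 1 10.0.0.2                              # sanity check both ways
ip netns exec "$NS" ping -c 1 10.0.0.1
```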
00:23:13.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:23:13.544 00:23:13.544 --- 10.0.0.1 ping statistics --- 00:23:13.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.544 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:13.544 net.core.busy_poll = 1 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:13.544 net.core.busy_read = 1 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.544 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3215562 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
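The `adq_configure_driver` steps above are the heart of this test: ADQ (Application Device Queues) steers NVMe/TCP traffic for port 4420 into a dedicated hardware traffic class on the E810 NIC and enables busy polling. The same commands, collected from the trace (run inside the target namespace as root; device and IP values are the ones from this log):

```shell
# adq_configure_driver, collected verbatim from the perf_adq.sh trace above.
DEV=cvl_0_0
ethtool --offload "$DEV" hw-tc-offload on
ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 gets queues 0-1, TC1 gets queues 2-3
tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$DEV" ingress
# Steer NVMe/TCP (dst 10.0.0.2:4420) into hardware TC1, hardware-only
tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

`skip_sw` forces the flower filter to be offloaded: if the NIC cannot program it in hardware the `tc filter add` fails, which is exactly what this test wants to detect.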
3215562 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3215562 ']' 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.544 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.544 [2024-11-25 13:22:11.057897] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:23:13.544 [2024-11-25 13:22:11.057986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.544 [2024-11-25 13:22:11.131791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.544 [2024-11-25 13:22:11.191594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.544 [2024-11-25 13:22:11.191647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.544 [2024-11-25 13:22:11.191677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.544 [2024-11-25 13:22:11.191688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:13.544 [2024-11-25 13:22:11.191698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.544 [2024-11-25 13:22:11.193301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.544 [2024-11-25 13:22:11.193363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.544 [2024-11-25 13:22:11.193431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.544 [2024-11-25 13:22:11.193434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
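The `waitforlisten` trace above (common/autotest_common.sh@839-844, `max_retries=100`, default socket `/var/tmp/spdk.sock`) blocks until the freshly started target answers RPCs. A sketch of that polling loop — an assumption-based reconstruction from the variables the log prints, not the real helper's source:

```shell
# Sketch of the waitforlisten pattern above, reconstructed from the locals
# the log shows (rpc_addr, max_retries=100); not the actual helper source.
# Assumes SPDK's scripts/rpc.py is on the path relative to the repo root.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1                # target died early
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
      &>/dev/null && return 0                             # RPC server is up
    sleep 0.5
  done
  return 1
}
```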
00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.803 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 [2024-11-25 13:22:11.490137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.062 13:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 Malloc1 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 [2024-11-25 13:22:11.552986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3215708 
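The `adq_configure_nvmf_target` RPC sequence traced above, as it would look when driven directly with `scripts/rpc.py` against a target started with `--wait-for-rpc`. All subsystem names, sizes, and addresses are taken from this log; the `RPC` path is an assumption about where the script lives relative to the working directory:

```shell
# adq_configure_nvmf_target, reconstructed from the rpc_cmd trace above.
# Requires an SPDK target already running with --wait-for-rpc, plus jq.
RPC=scripts/rpc.py
IMPL=$($RPC sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
$RPC sock_impl_set_options --enable-placement-id 1 \
    --enable-zerocopy-send-server -i "$IMPL"
$RPC framework_start_init                 # leave --wait-for-rpc pause
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1 # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

The socket options are the ADQ-relevant part: placement IDs let the target keep each connection's I/O on the poll group that matches its hardware queue, and `--sock-priority 1` tags the traffic for the TC1 class configured earlier.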
00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:14.062 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:16.044 "tick_rate": 2700000000, 00:23:16.044 "poll_groups": [ 00:23:16.044 { 00:23:16.044 "name": "nvmf_tgt_poll_group_000", 00:23:16.044 "admin_qpairs": 1, 00:23:16.044 "io_qpairs": 0, 00:23:16.044 "current_admin_qpairs": 1, 00:23:16.044 "current_io_qpairs": 0, 00:23:16.044 "pending_bdev_io": 0, 00:23:16.044 "completed_nvme_io": 0, 00:23:16.044 "transports": [ 00:23:16.044 { 00:23:16.044 "trtype": "TCP" 00:23:16.044 } 00:23:16.044 ] 00:23:16.044 }, 00:23:16.044 { 00:23:16.044 "name": "nvmf_tgt_poll_group_001", 00:23:16.044 "admin_qpairs": 0, 00:23:16.044 "io_qpairs": 4, 00:23:16.044 "current_admin_qpairs": 0, 00:23:16.044 "current_io_qpairs": 4, 00:23:16.044 "pending_bdev_io": 0, 00:23:16.044 "completed_nvme_io": 33081, 00:23:16.044 "transports": [ 00:23:16.044 { 00:23:16.044 "trtype": "TCP" 00:23:16.044 } 00:23:16.044 ] 00:23:16.044 }, 00:23:16.044 { 00:23:16.044 "name": "nvmf_tgt_poll_group_002", 00:23:16.044 "admin_qpairs": 0, 00:23:16.044 "io_qpairs": 0, 00:23:16.044 "current_admin_qpairs": 0, 00:23:16.044 
"current_io_qpairs": 0, 00:23:16.044 "pending_bdev_io": 0, 00:23:16.044 "completed_nvme_io": 0, 00:23:16.044 "transports": [ 00:23:16.044 { 00:23:16.044 "trtype": "TCP" 00:23:16.044 } 00:23:16.044 ] 00:23:16.044 }, 00:23:16.044 { 00:23:16.044 "name": "nvmf_tgt_poll_group_003", 00:23:16.044 "admin_qpairs": 0, 00:23:16.044 "io_qpairs": 0, 00:23:16.044 "current_admin_qpairs": 0, 00:23:16.044 "current_io_qpairs": 0, 00:23:16.044 "pending_bdev_io": 0, 00:23:16.044 "completed_nvme_io": 0, 00:23:16.044 "transports": [ 00:23:16.044 { 00:23:16.044 "trtype": "TCP" 00:23:16.044 } 00:23:16.044 ] 00:23:16.044 } 00:23:16.044 ] 00:23:16.044 }' 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:23:16.044 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3215708 00:23:24.176 Initializing NVMe Controllers 00:23:24.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:24.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:24.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:24.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:24.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:24.176 Initialization complete. Launching workers. 
00:23:24.176 ======================================================== 00:23:24.176 Latency(us) 00:23:24.176 Device Information : IOPS MiB/s Average min max 00:23:24.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4656.59 18.19 13746.70 1884.83 61888.56 00:23:24.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4467.79 17.45 14356.93 1952.74 60628.48 00:23:24.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4415.70 17.25 14495.81 1832.08 62354.62 00:23:24.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3941.02 15.39 16238.45 1877.53 63942.66 00:23:24.176 ======================================================== 00:23:24.176 Total : 17481.09 68.29 14653.64 1832.08 63942.66 00:23:24.176 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.176 rmmod nvme_tcp 00:23:24.176 rmmod nvme_fabrics 00:23:24.176 rmmod nvme_keyring 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:24.176 13:22:21 
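The pass/fail gate for this run is the poll-group check at perf_adq.sh@108-109 above: it counts poll groups with no active I/O qpairs and fails if fewer than 2 are idle, i.e. if ADQ steering failed to keep all four I/O qpairs pinned onto one group. The same check, reproduced standalone against a trimmed-down version of the `nvmf_get_stats` output captured in this log (needs `jq`):

```shell
# The idle-group check from perf_adq.sh@108, reproduced standalone.
# Counts poll groups whose current_io_qpairs is 0, exactly as the log's
# jq | wc -l pipeline does. Requires jq.
idle_poll_groups() {
  jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l
}
# Trimmed-down stand-in for the nvmf_get_stats output captured above:
# group_001 carries all 4 I/O qpairs, the other three sit idle.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_000","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_001","current_io_qpairs":4},
  {"name":"nvmf_tgt_poll_group_002","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_003","current_io_qpairs":0}]}'
count=$(echo "$stats" | idle_poll_groups)
[ "$count" -ge 2 ]   # ADQ steering worked: I/O concentrated on one group
```

In the run above `count=3`, so `[[ 3 -lt 2 ]]` is false and the test proceeds to teardown rather than aborting.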
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3215562 ']' 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3215562 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3215562 ']' 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3215562 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.176 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3215562 00:23:24.434 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.434 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.434 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3215562' 00:23:24.434 killing process with pid 3215562 00:23:24.434 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3215562 00:23:24.434 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3215562 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:24.692 
13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.692 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:26.596 00:23:26.596 real 0m44.359s 00:23:26.596 user 2m39.585s 00:23:26.596 sys 0m10.151s 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:26.596 ************************************ 00:23:26.596 END TEST nvmf_perf_adq 00:23:26.596 ************************************ 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.596 ************************************ 00:23:26.596 START TEST nvmf_shutdown 00:23:26.596 ************************************ 00:23:26.596 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:26.596 * Looking for test storage... 00:23:26.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:26.855 13:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.855 --rc genhtml_branch_coverage=1 00:23:26.855 --rc genhtml_function_coverage=1 00:23:26.855 --rc genhtml_legend=1 00:23:26.855 --rc geninfo_all_blocks=1 00:23:26.855 --rc geninfo_unexecuted_blocks=1 00:23:26.855 00:23:26.855 ' 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.855 --rc genhtml_branch_coverage=1 00:23:26.855 --rc genhtml_function_coverage=1 00:23:26.855 --rc genhtml_legend=1 00:23:26.855 --rc geninfo_all_blocks=1 00:23:26.855 --rc geninfo_unexecuted_blocks=1 00:23:26.855 00:23:26.855 ' 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.855 --rc genhtml_branch_coverage=1 00:23:26.855 --rc genhtml_function_coverage=1 00:23:26.855 --rc genhtml_legend=1 00:23:26.855 --rc geninfo_all_blocks=1 00:23:26.855 --rc geninfo_unexecuted_blocks=1 00:23:26.855 00:23:26.855 ' 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:26.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:26.855 --rc genhtml_branch_coverage=1 00:23:26.855 --rc genhtml_function_coverage=1 00:23:26.855 --rc genhtml_legend=1 00:23:26.855 --rc geninfo_all_blocks=1 00:23:26.855 --rc geninfo_unexecuted_blocks=1 00:23:26.855 00:23:26.855 ' 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.855 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:26.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:26.856 ************************************ 00:23:26.856 START TEST nvmf_shutdown_tc1 00:23:26.856 ************************************ 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.856 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:28.759 13:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.759 13:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:28.759 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.759 13:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:28.759 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.759 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:28.760 Found net devices under 0000:09:00.0: cvl_0_0 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:28.760 Found net devices under 0000:09:00.1: cvl_0_1 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.760 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:29.018 13:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:29.018 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:29.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:23:29.019 00:23:29.019 --- 10.0.0.2 ping statistics --- 00:23:29.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.019 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:23:29.019 00:23:29.019 --- 10.0.0.1 ping statistics --- 00:23:29.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.019 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3218881 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3218881 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3218881 ']' 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:29.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.019 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.019 [2024-11-25 13:22:26.624245] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:23:29.019 [2024-11-25 13:22:26.624344] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.277 [2024-11-25 13:22:26.702208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:29.277 [2024-11-25 13:22:26.761962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.277 [2024-11-25 13:22:26.762016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.277 [2024-11-25 13:22:26.762030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.277 [2024-11-25 13:22:26.762041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.277 [2024-11-25 13:22:26.762051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
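The `nvmf_tcp_init` trace above (nvmf/common.sh) follows a standard pattern for isolating the SPDK target from the initiator on one host: move one interface into a network namespace, address both sides, open the NVMe/TCP port, and ping-verify before launching `nvmf_tgt`. A hedged sketch of those steps, using the interface names and IPs from this log; `RUN` is a stand-in prefix (not part of the original script) so the sequence can be previewed without root:

```shell
RUN="echo"               # replace with "sudo" (or empty) to actually apply

TARGET_IF=cvl_0_0        # moved into a namespace; hosts the SPDK target
INITIATOR_IF=cvl_0_1     # stays in the root namespace for the initiator
NS=cvl_0_0_ns_spdk

# Clear stale addresses, create the namespace, move the target interface in
$RUN ip -4 addr flush "$TARGET_IF"
$RUN ip -4 addr flush "$INITIATOR_IF"
$RUN ip netns add "$NS"
$RUN ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the link and bring everything up
$RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$RUN ip link set "$INITIATOR_IF" up
$RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
$RUN ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port through the host firewall
$RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions before starting nvmf_tgt
$RUN ping -c 1 10.0.0.2
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target app is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt …`, as seen a few lines below), so target and initiator traffic really traverses the two physical ports rather than loopback.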
00:23:29.277 [2024-11-25 13:22:26.763762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.277 [2024-11-25 13:22:26.763846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:29.277 [2024-11-25 13:22:26.763787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.277 [2024-11-25 13:22:26.763849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.277 [2024-11-25 13:22:26.923397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.277 13:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.277 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.536 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.536 Malloc1 00:23:29.536 [2024-11-25 13:22:27.022660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.536 Malloc2 00:23:29.536 Malloc3 00:23:29.536 Malloc4 00:23:29.795 Malloc5 00:23:29.795 Malloc6 00:23:29.795 Malloc7 00:23:29.795 Malloc8 00:23:29.795 Malloc9 
00:23:29.795 Malloc10 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3219057 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3219057 /var/tmp/bdevperf.sock 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3219057 ']' 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:30.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.054 { 00:23:30.054 "params": { 00:23:30.054 "name": "Nvme$subsystem", 00:23:30.054 "trtype": "$TEST_TRANSPORT", 00:23:30.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.054 "adrfam": "ipv4", 00:23:30.054 "trsvcid": "$NVMF_PORT", 00:23:30.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.054 "hdgst": ${hdgst:-false}, 00:23:30.054 "ddgst": ${ddgst:-false} 00:23:30.054 }, 00:23:30.054 "method": "bdev_nvme_attach_controller" 00:23:30.054 } 00:23:30.054 EOF 00:23:30.054 )") 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.054 { 00:23:30.054 "params": { 00:23:30.054 "name": "Nvme$subsystem", 00:23:30.054 "trtype": "$TEST_TRANSPORT", 00:23:30.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.054 
"adrfam": "ipv4", 00:23:30.054 "trsvcid": "$NVMF_PORT", 00:23:30.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.054 "hdgst": ${hdgst:-false}, 00:23:30.054 "ddgst": ${ddgst:-false} 00:23:30.054 }, 00:23:30.054 "method": "bdev_nvme_attach_controller" 00:23:30.054 } 00:23:30.054 EOF 00:23:30.054 )") 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.054 { 00:23:30.054 "params": { 00:23:30.054 "name": "Nvme$subsystem", 00:23:30.054 "trtype": "$TEST_TRANSPORT", 00:23:30.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.054 "adrfam": "ipv4", 00:23:30.054 "trsvcid": "$NVMF_PORT", 00:23:30.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.054 "hdgst": ${hdgst:-false}, 00:23:30.054 "ddgst": ${ddgst:-false} 00:23:30.054 }, 00:23:30.054 "method": "bdev_nvme_attach_controller" 00:23:30.054 } 00:23:30.054 EOF 00:23:30.054 )") 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.054 { 00:23:30.054 "params": { 00:23:30.054 "name": "Nvme$subsystem", 00:23:30.054 "trtype": "$TEST_TRANSPORT", 00:23:30.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.054 "adrfam": "ipv4", 00:23:30.054 "trsvcid": "$NVMF_PORT", 00:23:30.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:30.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.054 "hdgst": ${hdgst:-false}, 00:23:30.054 "ddgst": ${ddgst:-false} 00:23:30.054 }, 00:23:30.054 "method": "bdev_nvme_attach_controller" 00:23:30.054 } 00:23:30.054 EOF 00:23:30.054 )") 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.054 { 00:23:30.054 "params": { 00:23:30.054 "name": "Nvme$subsystem", 00:23:30.054 "trtype": "$TEST_TRANSPORT", 00:23:30.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.054 "adrfam": "ipv4", 00:23:30.054 "trsvcid": "$NVMF_PORT", 00:23:30.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.054 "hdgst": ${hdgst:-false}, 00:23:30.054 "ddgst": ${ddgst:-false} 00:23:30.054 }, 00:23:30.054 "method": "bdev_nvme_attach_controller" 00:23:30.054 } 00:23:30.054 EOF 00:23:30.054 )") 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.054 { 00:23:30.054 "params": { 00:23:30.054 "name": "Nvme$subsystem", 00:23:30.054 "trtype": "$TEST_TRANSPORT", 00:23:30.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.054 "adrfam": "ipv4", 00:23:30.054 "trsvcid": "$NVMF_PORT", 00:23:30.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.054 "hdgst": ${hdgst:-false}, 00:23:30.054 "ddgst": 
${ddgst:-false} 00:23:30.054 }, 00:23:30.054 "method": "bdev_nvme_attach_controller" 00:23:30.054 } 00:23:30.054 EOF 00:23:30.054 )") 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.054 { 00:23:30.054 "params": { 00:23:30.054 "name": "Nvme$subsystem", 00:23:30.054 "trtype": "$TEST_TRANSPORT", 00:23:30.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.054 "adrfam": "ipv4", 00:23:30.054 "trsvcid": "$NVMF_PORT", 00:23:30.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.054 "hdgst": ${hdgst:-false}, 00:23:30.054 "ddgst": ${ddgst:-false} 00:23:30.054 }, 00:23:30.054 "method": "bdev_nvme_attach_controller" 00:23:30.054 } 00:23:30.054 EOF 00:23:30.054 )") 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.054 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.055 { 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme$subsystem", 00:23:30.055 "trtype": "$TEST_TRANSPORT", 00:23:30.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "$NVMF_PORT", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.055 "hdgst": ${hdgst:-false}, 00:23:30.055 "ddgst": ${ddgst:-false} 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 } 00:23:30.055 EOF 00:23:30.055 
)") 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.055 { 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme$subsystem", 00:23:30.055 "trtype": "$TEST_TRANSPORT", 00:23:30.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "$NVMF_PORT", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.055 "hdgst": ${hdgst:-false}, 00:23:30.055 "ddgst": ${ddgst:-false} 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 } 00:23:30.055 EOF 00:23:30.055 )") 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.055 { 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme$subsystem", 00:23:30.055 "trtype": "$TEST_TRANSPORT", 00:23:30.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "$NVMF_PORT", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.055 "hdgst": ${hdgst:-false}, 00:23:30.055 "ddgst": ${ddgst:-false} 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 } 00:23:30.055 EOF 00:23:30.055 )") 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.055 
13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:30.055 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme1", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme2", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme3", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme4", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 
00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme5", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme6", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme7", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme8", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme9", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 },{ 00:23:30.055 "params": { 00:23:30.055 "name": "Nvme10", 00:23:30.055 "trtype": "tcp", 00:23:30.055 "traddr": "10.0.0.2", 00:23:30.055 "adrfam": "ipv4", 00:23:30.055 "trsvcid": "4420", 00:23:30.055 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:30.055 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:30.055 "hdgst": false, 00:23:30.055 "ddgst": false 00:23:30.055 }, 00:23:30.055 "method": "bdev_nvme_attach_controller" 00:23:30.055 }' 00:23:30.055 [2024-11-25 13:22:27.544186] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:23:30.055 [2024-11-25 13:22:27.544278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:30.055 [2024-11-25 13:22:27.617824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.055 [2024-11-25 13:22:27.678487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:31.954 13:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3219057 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:31.954 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:33.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3219057 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3218881 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.329 "params": { 00:23:33.329 "name": "Nvme$subsystem", 00:23:33.329 "trtype": "$TEST_TRANSPORT", 00:23:33.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.329 "adrfam": "ipv4", 00:23:33.329 "trsvcid": 
"$NVMF_PORT", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.329 "hdgst": ${hdgst:-false}, 00:23:33.329 "ddgst": ${ddgst:-false} 00:23:33.329 }, 00:23:33.329 "method": "bdev_nvme_attach_controller" 00:23:33.329 } 00:23:33.329 EOF 00:23:33.329 )") 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.329 "params": { 00:23:33.329 "name": "Nvme$subsystem", 00:23:33.329 "trtype": "$TEST_TRANSPORT", 00:23:33.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.329 "adrfam": "ipv4", 00:23:33.329 "trsvcid": "$NVMF_PORT", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.329 "hdgst": ${hdgst:-false}, 00:23:33.329 "ddgst": ${ddgst:-false} 00:23:33.329 }, 00:23:33.329 "method": "bdev_nvme_attach_controller" 00:23:33.329 } 00:23:33.329 EOF 00:23:33.329 )") 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.329 "params": { 00:23:33.329 "name": "Nvme$subsystem", 00:23:33.329 "trtype": "$TEST_TRANSPORT", 00:23:33.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.329 "adrfam": "ipv4", 00:23:33.329 "trsvcid": "$NVMF_PORT", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.329 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:33.329 "hdgst": ${hdgst:-false}, 00:23:33.329 "ddgst": ${ddgst:-false} 00:23:33.329 }, 00:23:33.329 "method": "bdev_nvme_attach_controller" 00:23:33.329 } 00:23:33.329 EOF 00:23:33.329 )") 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.329 "params": { 00:23:33.329 "name": "Nvme$subsystem", 00:23:33.329 "trtype": "$TEST_TRANSPORT", 00:23:33.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.329 "adrfam": "ipv4", 00:23:33.329 "trsvcid": "$NVMF_PORT", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.329 "hdgst": ${hdgst:-false}, 00:23:33.329 "ddgst": ${ddgst:-false} 00:23:33.329 }, 00:23:33.329 "method": "bdev_nvme_attach_controller" 00:23:33.329 } 00:23:33.329 EOF 00:23:33.329 )") 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.329 "params": { 00:23:33.329 "name": "Nvme$subsystem", 00:23:33.329 "trtype": "$TEST_TRANSPORT", 00:23:33.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.329 "adrfam": "ipv4", 00:23:33.329 "trsvcid": "$NVMF_PORT", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.329 "hdgst": ${hdgst:-false}, 00:23:33.329 "ddgst": ${ddgst:-false} 00:23:33.329 
}, 00:23:33.329 "method": "bdev_nvme_attach_controller" 00:23:33.329 } 00:23:33.329 EOF 00:23:33.329 )") 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.329 "params": { 00:23:33.329 "name": "Nvme$subsystem", 00:23:33.329 "trtype": "$TEST_TRANSPORT", 00:23:33.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.329 "adrfam": "ipv4", 00:23:33.329 "trsvcid": "$NVMF_PORT", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.329 "hdgst": ${hdgst:-false}, 00:23:33.329 "ddgst": ${ddgst:-false} 00:23:33.329 }, 00:23:33.329 "method": "bdev_nvme_attach_controller" 00:23:33.329 } 00:23:33.329 EOF 00:23:33.329 )") 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.329 "params": { 00:23:33.329 "name": "Nvme$subsystem", 00:23:33.329 "trtype": "$TEST_TRANSPORT", 00:23:33.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.329 "adrfam": "ipv4", 00:23:33.329 "trsvcid": "$NVMF_PORT", 00:23:33.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.329 "hdgst": ${hdgst:-false}, 00:23:33.329 "ddgst": ${ddgst:-false} 00:23:33.329 }, 00:23:33.329 "method": "bdev_nvme_attach_controller" 00:23:33.329 } 00:23:33.329 EOF 00:23:33.329 )") 00:23:33.329 13:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.329 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.329 { 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme$subsystem", 00:23:33.330 "trtype": "$TEST_TRANSPORT", 00:23:33.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "$NVMF_PORT", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.330 "hdgst": ${hdgst:-false}, 00:23:33.330 "ddgst": ${ddgst:-false} 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 } 00:23:33.330 EOF 00:23:33.330 )") 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.330 { 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme$subsystem", 00:23:33.330 "trtype": "$TEST_TRANSPORT", 00:23:33.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "$NVMF_PORT", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.330 "hdgst": ${hdgst:-false}, 00:23:33.330 "ddgst": ${ddgst:-false} 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 } 00:23:33.330 EOF 00:23:33.330 )") 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.330 13:22:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:33.330 { 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme$subsystem", 00:23:33.330 "trtype": "$TEST_TRANSPORT", 00:23:33.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "$NVMF_PORT", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.330 "hdgst": ${hdgst:-false}, 00:23:33.330 "ddgst": ${ddgst:-false} 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 } 00:23:33.330 EOF 00:23:33.330 )") 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:33.330 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme1", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme2", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme3", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme4", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 
00:23:33.330 "name": "Nvme5", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme6", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme7", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme8", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme9", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 },{ 00:23:33.330 "params": { 00:23:33.330 "name": "Nvme10", 00:23:33.330 "trtype": "tcp", 00:23:33.330 "traddr": "10.0.0.2", 00:23:33.330 "adrfam": "ipv4", 00:23:33.330 "trsvcid": "4420", 00:23:33.330 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:33.330 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:33.330 "hdgst": false, 00:23:33.330 "ddgst": false 00:23:33.330 }, 00:23:33.330 "method": "bdev_nvme_attach_controller" 00:23:33.330 }' 00:23:33.330 [2024-11-25 13:22:30.621126] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:23:33.330 [2024-11-25 13:22:30.621216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219475 ] 00:23:33.330 [2024-11-25 13:22:30.696605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.331 [2024-11-25 13:22:30.759353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.704 Running I/O for 1 seconds... 
00:23:35.895 1797.00 IOPS, 112.31 MiB/s 00:23:35.895 Latency(us) 00:23:35.895 [2024-11-25T12:22:33.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.895 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme1n1 : 1.10 231.97 14.50 0.00 0.00 271628.14 31651.46 237677.23 00:23:35.895 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme2n1 : 1.10 236.29 14.77 0.00 0.00 260844.16 11747.93 248551.35 00:23:35.895 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme3n1 : 1.10 232.46 14.53 0.00 0.00 263364.27 24369.68 260978.92 00:23:35.895 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme4n1 : 1.12 229.37 14.34 0.00 0.00 262450.06 18155.90 259425.47 00:23:35.895 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme5n1 : 1.14 223.75 13.98 0.00 0.00 264834.09 20388.98 260978.92 00:23:35.895 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme6n1 : 1.13 226.98 14.19 0.00 0.00 256413.01 31457.28 251658.24 00:23:35.895 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme7n1 : 1.12 227.71 14.23 0.00 0.00 250996.62 18544.26 256318.58 00:23:35.895 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme8n1 : 1.14 228.69 14.29 0.00 0.00 245468.45 1978.22 262532.36 
00:23:35.895 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme9n1 : 1.17 219.29 13.71 0.00 0.00 252903.16 23592.96 271853.04 00:23:35.895 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:35.895 Verification LBA range: start 0x0 length 0x400 00:23:35.895 Nvme10n1 : 1.18 270.88 16.93 0.00 0.00 201419.78 4490.43 281173.71 00:23:35.895 [2024-11-25T12:22:33.554Z] =================================================================================================================== 00:23:35.895 [2024-11-25T12:22:33.554Z] Total : 2327.38 145.46 0.00 0.00 251780.98 1978.22 281173.71 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:36.153 13:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.153 rmmod nvme_tcp 00:23:36.153 rmmod nvme_fabrics 00:23:36.153 rmmod nvme_keyring 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3218881 ']' 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3218881 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3218881 ']' 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3218881 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3218881 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:36.153 13:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3218881' 00:23:36.153 killing process with pid 3218881 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3218881 00:23:36.153 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3218881 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.719 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:39.261 00:23:39.261 real 0m11.963s 00:23:39.261 user 0m34.997s 00:23:39.261 sys 0m3.233s 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.261 ************************************ 00:23:39.261 END TEST nvmf_shutdown_tc1 00:23:39.261 ************************************ 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:39.261 ************************************ 00:23:39.261 START TEST nvmf_shutdown_tc2 00:23:39.261 ************************************ 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.261 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.261 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.261 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.261 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.262 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:39.262 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:39.262 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.262 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:39.262 Found net devices under 0000:09:00.0: cvl_0_0 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.262 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:39.262 Found net devices under 0000:09:00.1: cvl_0_1 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:23:39.262 00:23:39.262 --- 10.0.0.2 ping statistics --- 00:23:39.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.262 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:23:39.262 00:23:39.262 --- 10.0.0.1 ping statistics --- 00:23:39.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.262 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.262 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.263 
13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3220247 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3220247 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3220247 ']' 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.263 [2024-11-25 13:22:36.617478] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:23:39.263 [2024-11-25 13:22:36.617556] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.263 [2024-11-25 13:22:36.684462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.263 [2024-11-25 13:22:36.739226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.263 [2024-11-25 13:22:36.739281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.263 [2024-11-25 13:22:36.739317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.263 [2024-11-25 13:22:36.739330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.263 [2024-11-25 13:22:36.739339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.263 [2024-11-25 13:22:36.740858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.263 [2024-11-25 13:22:36.740918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.263 [2024-11-25 13:22:36.740985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.263 [2024-11-25 13:22:36.740988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.263 [2024-11-25 13:22:36.881774] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.263 13:22:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.263 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.521 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.522 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.522 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:39.522 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:39.522 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:39.522 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.522 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.522 Malloc1 00:23:39.522 [2024-11-25 13:22:36.979765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.522 Malloc2 00:23:39.522 Malloc3 00:23:39.522 Malloc4 00:23:39.522 Malloc5 00:23:39.780 Malloc6 00:23:39.780 Malloc7 00:23:39.780 Malloc8 00:23:39.780 Malloc9 
00:23:39.780 Malloc10 00:23:39.780 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.780 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:39.780 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.780 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3220426 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3220426 /var/tmp/bdevperf.sock 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3220426 ']' 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:40.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.039 { 00:23:40.039 "params": { 00:23:40.039 "name": "Nvme$subsystem", 00:23:40.039 "trtype": "$TEST_TRANSPORT", 00:23:40.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.039 "adrfam": "ipv4", 00:23:40.039 "trsvcid": "$NVMF_PORT", 00:23:40.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.039 "hdgst": ${hdgst:-false}, 00:23:40.039 "ddgst": ${ddgst:-false} 00:23:40.039 }, 00:23:40.039 "method": "bdev_nvme_attach_controller" 00:23:40.039 } 00:23:40.039 EOF 00:23:40.039 )") 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.039 { 00:23:40.039 "params": { 00:23:40.039 "name": "Nvme$subsystem", 00:23:40.039 "trtype": "$TEST_TRANSPORT", 00:23:40.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.039 "adrfam": "ipv4", 00:23:40.039 "trsvcid": "$NVMF_PORT", 00:23:40.039 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.039 "hdgst": ${hdgst:-false}, 00:23:40.039 "ddgst": ${ddgst:-false} 00:23:40.039 }, 00:23:40.039 "method": "bdev_nvme_attach_controller" 00:23:40.039 } 00:23:40.039 EOF 00:23:40.039 )") 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.039 { 00:23:40.039 "params": { 00:23:40.039 "name": "Nvme$subsystem", 00:23:40.039 "trtype": "$TEST_TRANSPORT", 00:23:40.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.039 "adrfam": "ipv4", 00:23:40.039 "trsvcid": "$NVMF_PORT", 00:23:40.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.039 "hdgst": ${hdgst:-false}, 00:23:40.039 "ddgst": ${ddgst:-false} 00:23:40.039 }, 00:23:40.039 "method": "bdev_nvme_attach_controller" 00:23:40.039 } 00:23:40.039 EOF 00:23:40.039 )") 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.039 { 00:23:40.039 "params": { 00:23:40.039 "name": "Nvme$subsystem", 00:23:40.039 "trtype": "$TEST_TRANSPORT", 00:23:40.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.039 "adrfam": "ipv4", 00:23:40.039 "trsvcid": "$NVMF_PORT", 00:23:40.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.039 "hdgst": 
${hdgst:-false}, 00:23:40.039 "ddgst": ${ddgst:-false} 00:23:40.039 }, 00:23:40.039 "method": "bdev_nvme_attach_controller" 00:23:40.039 } 00:23:40.039 EOF 00:23:40.039 )") 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.039 { 00:23:40.039 "params": { 00:23:40.039 "name": "Nvme$subsystem", 00:23:40.039 "trtype": "$TEST_TRANSPORT", 00:23:40.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.039 "adrfam": "ipv4", 00:23:40.039 "trsvcid": "$NVMF_PORT", 00:23:40.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.039 "hdgst": ${hdgst:-false}, 00:23:40.039 "ddgst": ${ddgst:-false} 00:23:40.039 }, 00:23:40.039 "method": "bdev_nvme_attach_controller" 00:23:40.039 } 00:23:40.039 EOF 00:23:40.039 )") 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.039 { 00:23:40.039 "params": { 00:23:40.039 "name": "Nvme$subsystem", 00:23:40.039 "trtype": "$TEST_TRANSPORT", 00:23:40.039 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.039 "adrfam": "ipv4", 00:23:40.039 "trsvcid": "$NVMF_PORT", 00:23:40.039 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.039 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.039 "hdgst": ${hdgst:-false}, 00:23:40.039 "ddgst": ${ddgst:-false} 00:23:40.039 }, 00:23:40.039 "method": "bdev_nvme_attach_controller" 
00:23:40.039 } 00:23:40.039 EOF 00:23:40.039 )") 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.039 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.039 { 00:23:40.039 "params": { 00:23:40.039 "name": "Nvme$subsystem", 00:23:40.039 "trtype": "$TEST_TRANSPORT", 00:23:40.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "$NVMF_PORT", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.040 "hdgst": ${hdgst:-false}, 00:23:40.040 "ddgst": ${ddgst:-false} 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 } 00:23:40.040 EOF 00:23:40.040 )") 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.040 { 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme$subsystem", 00:23:40.040 "trtype": "$TEST_TRANSPORT", 00:23:40.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "$NVMF_PORT", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.040 "hdgst": ${hdgst:-false}, 00:23:40.040 "ddgst": ${ddgst:-false} 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 } 00:23:40.040 EOF 00:23:40.040 )") 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.040 { 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme$subsystem", 00:23:40.040 "trtype": "$TEST_TRANSPORT", 00:23:40.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "$NVMF_PORT", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.040 "hdgst": ${hdgst:-false}, 00:23:40.040 "ddgst": ${ddgst:-false} 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 } 00:23:40.040 EOF 00:23:40.040 )") 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:40.040 { 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme$subsystem", 00:23:40.040 "trtype": "$TEST_TRANSPORT", 00:23:40.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "$NVMF_PORT", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.040 "hdgst": ${hdgst:-false}, 00:23:40.040 "ddgst": ${ddgst:-false} 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 } 00:23:40.040 EOF 00:23:40.040 )") 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:40.040 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme1", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme2", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme3", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme4", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 
00:23:40.040 "params": { 00:23:40.040 "name": "Nvme5", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme6", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme7", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme8", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme9", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.040 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 },{ 00:23:40.040 "params": { 00:23:40.040 "name": "Nvme10", 00:23:40.040 "trtype": "tcp", 00:23:40.040 "traddr": "10.0.0.2", 00:23:40.040 "adrfam": "ipv4", 00:23:40.040 "trsvcid": "4420", 00:23:40.040 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.040 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.040 "hdgst": false, 00:23:40.040 "ddgst": false 00:23:40.040 }, 00:23:40.040 "method": "bdev_nvme_attach_controller" 00:23:40.040 }' 00:23:40.040 [2024-11-25 13:22:37.507446] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:23:40.040 [2024-11-25 13:22:37.507529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220426 ] 00:23:40.040 [2024-11-25 13:22:37.578328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.040 [2024-11-25 13:22:37.638643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.938 Running I/O for 10 seconds... 
00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.938 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:42.195 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.196 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:42.196 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:42.196 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3220426 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3220426 ']' 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3220426 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220426 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220426' 00:23:42.454 killing process with pid 3220426 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3220426 00:23:42.454 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3220426 00:23:42.454 
Received shutdown signal, test time was about 0.739869 seconds 00:23:42.454 00:23:42.454 Latency(us) 00:23:42.454 [2024-11-25T12:22:40.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.454 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme1n1 : 0.72 265.43 16.59 0.00 0.00 237012.76 28544.57 222142.77 00:23:42.454 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme2n1 : 0.73 263.08 16.44 0.00 0.00 231825.76 18155.90 253211.69 00:23:42.454 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme3n1 : 0.73 263.86 16.49 0.00 0.00 226573.27 24175.50 229910.00 00:23:42.454 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme4n1 : 0.69 196.36 12.27 0.00 0.00 290630.03 4708.88 253211.69 00:23:42.454 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme5n1 : 0.70 182.77 11.42 0.00 0.00 308347.64 20388.98 246997.90 00:23:42.454 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme6n1 : 0.74 259.83 16.24 0.00 0.00 212325.52 20486.07 239230.67 00:23:42.454 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme7n1 : 0.73 261.29 16.33 0.00 0.00 204880.66 20486.07 251658.24 00:23:42.454 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme8n1 : 0.72 266.12 16.63 0.00 0.00 
193308.70 19418.07 274959.93 00:23:42.454 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme9n1 : 0.71 180.55 11.28 0.00 0.00 277217.28 21845.33 270299.59 00:23:42.454 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:42.454 Verification LBA range: start 0x0 length 0x400 00:23:42.454 Nvme10n1 : 0.71 179.54 11.22 0.00 0.00 269878.23 20388.98 282727.16 00:23:42.454 [2024-11-25T12:22:40.113Z] =================================================================================================================== 00:23:42.454 [2024-11-25T12:22:40.113Z] Total : 2318.82 144.93 0.00 0.00 239091.11 4708.88 282727.16 00:23:42.712 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3220247 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.645 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.645 rmmod nvme_tcp 00:23:43.645 rmmod nvme_fabrics 00:23:43.903 rmmod nvme_keyring 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3220247 ']' 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3220247 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3220247 ']' 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3220247 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3220247 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3220247' 00:23:43.903 killing process with pid 3220247 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3220247 00:23:43.903 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3220247 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.468 13:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.468 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:46.368 00:23:46.368 real 0m7.519s 00:23:46.368 user 0m22.859s 00:23:46.368 sys 0m1.384s 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:46.368 ************************************ 00:23:46.368 END TEST nvmf_shutdown_tc2 00:23:46.368 ************************************ 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:46.368 ************************************ 00:23:46.368 START TEST nvmf_shutdown_tc3 00:23:46.368 ************************************ 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.368 
13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:46.368 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:46.369 13:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:46.369 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:46.369 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:46.369 13:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:46.369 Found net devices under 0000:09:00.0: cvl_0_0 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.369 13:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:46.369 Found net devices under 0000:09:00.1: cvl_0_1 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:46.369 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:46.369 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.369 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:46.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:23:46.626 00:23:46.626 --- 10.0.0.2 ping statistics --- 00:23:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.626 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:46.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:23:46.626 00:23:46.626 --- 10.0.0.1 ping statistics --- 00:23:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.626 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.626 
13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3221300 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3221300 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3221300 ']' 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.626 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.626 [2024-11-25 13:22:44.220345] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:23:46.626 [2024-11-25 13:22:44.220450] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.883 [2024-11-25 13:22:44.298379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.883 [2024-11-25 13:22:44.361721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.883 [2024-11-25 13:22:44.361767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.883 [2024-11-25 13:22:44.361795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.883 [2024-11-25 13:22:44.361807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.883 [2024-11-25 13:22:44.361817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:46.883 [2024-11-25 13:22:44.363479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.883 [2024-11-25 13:22:44.363552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:46.883 [2024-11-25 13:22:44.363530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.883 [2024-11-25 13:22:44.363555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.883 [2024-11-25 13:22:44.519589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.883 13:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:46.883 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.141 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.141 Malloc1 00:23:47.141 [2024-11-25 13:22:44.630972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.141 Malloc2 00:23:47.141 Malloc3 00:23:47.141 Malloc4 00:23:47.399 Malloc5 00:23:47.399 Malloc6 00:23:47.399 Malloc7 00:23:47.399 Malloc8 00:23:47.399 Malloc9 
00:23:47.657 Malloc10 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3221406 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3221406 /var/tmp/bdevperf.sock 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3221406 ']' 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:47.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.657 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.657 { 00:23:47.657 "params": { 00:23:47.657 "name": "Nvme$subsystem", 00:23:47.657 "trtype": "$TEST_TRANSPORT", 00:23:47.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.657 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 
"adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": 
${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 
)") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:47.658 { 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme$subsystem", 00:23:47.658 "trtype": "$TEST_TRANSPORT", 00:23:47.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "$NVMF_PORT", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.658 "hdgst": ${hdgst:-false}, 00:23:47.658 "ddgst": ${ddgst:-false} 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 } 00:23:47.658 EOF 00:23:47.658 )") 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:47.658 
13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:47.658 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme1", 00:23:47.658 "trtype": "tcp", 00:23:47.658 "traddr": "10.0.0.2", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "4420", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.658 "hdgst": false, 00:23:47.658 "ddgst": false 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 },{ 00:23:47.658 "params": { 00:23:47.658 "name": "Nvme2", 00:23:47.658 "trtype": "tcp", 00:23:47.658 "traddr": "10.0.0.2", 00:23:47.658 "adrfam": "ipv4", 00:23:47.658 "trsvcid": "4420", 00:23:47.658 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:47.658 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:47.658 "hdgst": false, 00:23:47.658 "ddgst": false 00:23:47.658 }, 00:23:47.658 "method": "bdev_nvme_attach_controller" 00:23:47.658 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme3", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme4", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 
00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme5", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme6", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme7", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme8", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme9", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 },{ 00:23:47.659 "params": { 00:23:47.659 "name": "Nvme10", 00:23:47.659 "trtype": "tcp", 00:23:47.659 "traddr": "10.0.0.2", 00:23:47.659 "adrfam": "ipv4", 00:23:47.659 "trsvcid": "4420", 00:23:47.659 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:47.659 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:47.659 "hdgst": false, 00:23:47.659 "ddgst": false 00:23:47.659 }, 00:23:47.659 "method": "bdev_nvme_attach_controller" 00:23:47.659 }' 00:23:47.659 [2024-11-25 13:22:45.169009] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:23:47.659 [2024-11-25 13:22:45.169086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221406 ] 00:23:47.659 [2024-11-25 13:22:45.241706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.659 [2024-11-25 13:22:45.302992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.557 Running I/O for 10 seconds... 
00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:49.815 13:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:49.815 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:50.073 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:50.351 13:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3221300 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3221300 ']' 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3221300 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3221300 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.351 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.352 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3221300' 00:23:50.352 killing process with pid 3221300 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3221300 00:23:50.352 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3221300 00:23:50.352 [2024-11-25 13:22:47.925169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd31b0 is same with the state(6) to be set 00:23:50.352 [2024-11-25 13:22:47.927602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5d80 is same with the state(6) to be set 00:23:50.353 [2024-11-25 13:22:47.929748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd36a0 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set
is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 
00:23:50.354 [2024-11-25 13:22:47.932754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932899] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.354 [2024-11-25 13:22:47.932980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.932992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.933084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd3b70 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 
is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 
00:23:50.355 [2024-11-25 13:22:47.934650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934806] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.934992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 
is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.935160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4060 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 
00:23:50.355 [2024-11-25 13:22:47.936162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.355 [2024-11-25 13:22:47.936221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936316] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 
is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 
00:23:50.356 [2024-11-25 13:22:47.936768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.936849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4530 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.937973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.937999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938036] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.356 [2024-11-25 13:22:47.938195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.357 [2024-11-25 13:22:47.938741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd4a00 is same with the state(6) to be set 00:23:50.357 [2024-11-25 13:22:47.940569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd53c0 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd53c0 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xfd53c0 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd53c0 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd53c0 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88a290 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941506]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc110 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb7970 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.941932] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2870 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.941978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.941998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.942013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.942026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.942040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.942054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.942068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.942081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.942093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb7b50 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.942130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with t[2024-11-25 13:22:47.942139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nshe state(6) to be set 00:23:50.358 id:0 cdw10:00000000 
cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.942164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.942165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.942180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with t[2024-11-25 13:22:47.942180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nshe state(6) to be set 00:23:50.358 id:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.942195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.942197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.942208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.942212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.942220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.942226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.358 [2024-11-25 13:22:47.942233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.358 [2024-11-25 13:22:47.942241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.358 [2024-11-25 13:22:47.942245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88b210 is same w[2024-11-25 13:22:47.942269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with tith the state(6) to be set 00:23:50.359 he state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with t[2024-11-25 13:22:47.942353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:23:50.359 
dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-11-25 13:22:47.942371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 he state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-11-25 13:22:47.942409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with tid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 he state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with t[2024-11-25 13:22:47.942424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(6) to be set 00:23:50.359 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889a40 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-25 13:22:47.942533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 he state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5910 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.359 [2024-11-25 13:22:47.942802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895000 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.359 [2024-11-25 13:22:47.942895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.359 [2024-11-25 13:22:47.942931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.359 [2024-11-25 13:22:47.942940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.359 [2024-11-25 13:22:47.942943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.360 [2024-11-25 13:22:47.942955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.360 [2024-11-25 13:22:47.942956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.942967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.360 [2024-11-25 13:22:47.942971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.942979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.360 [2024-11-25 13:22:47.942987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.942991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.360 [2024-11-25 13:22:47.943002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd5890 is same with the state(6) to be set 00:23:50.360 [2024-11-25 13:22:47.943022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943052]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943214] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 
[2024-11-25 13:22:47.943587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.360 [2024-11-25 13:22:47.943937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.360 [2024-11-25 13:22:47.943952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.943967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.943982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.943996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 
13:22:47.944271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.944822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 [2024-11-25 13:22:47.944835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.361 [2024-11-25 13:22:47.947212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:50.361 [2024-11-25 13:22:47.947262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895000 (9): Bad file descriptor 00:23:50.361 [2024-11-25 13:22:47.948845] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:50.361 [2024-11-25 13:22:47.948924] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:50.361 [2024-11-25 13:22:47.948997] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:50.361 [2024-11-25 13:22:47.949064] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:50.361 [2024-11-25 13:22:47.949139] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:50.361 [2024-11-25 13:22:47.949365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.361 
[2024-11-25 13:22:47.949391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.361 [2024-11-25 13:22:47.949414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.361 [2024-11-25 13:22:47.949430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.361 [2024-11-25 13:22:47.949446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.949460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.949476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.949491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.949507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.949521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.949536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc98f60 is same with the state(6) to be set
00:23:50.362 [2024-11-25 13:22:47.949789] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:50.362 [2024-11-25 13:22:47.949943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.362 [2024-11-25 13:22:47.949972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x895000 with addr=10.0.0.2, port=4420
00:23:50.362 [2024-11-25 13:22:47.949989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895000 is same with the state(6) to be set
00:23:50.362 [2024-11-25 13:22:47.951001] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:50.362 [2024-11-25 13:22:47.951076] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:50.362 [2024-11-25 13:22:47.951120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:50.362 [2024-11-25 13:22:47.951160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb7b50 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895000 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:50.362 [2024-11-25 13:22:47.951347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:50.362 [2024-11-25 13:22:47.951363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:50.362 [2024-11-25 13:22:47.951379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:50.362 [2024-11-25 13:22:47.951401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88a290 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fc110 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb7970 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:50.362 [2024-11-25 13:22:47.951551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.951572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:50.362 [2024-11-25 13:22:47.951596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.951610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:50.362 [2024-11-25 13:22:47.951624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.951638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:50.362 [2024-11-25 13:22:47.951651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.951664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2650 is same with the state(6) to be set
00:23:50.362 [2024-11-25 13:22:47.951695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2870 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88b210 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889a40 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.951787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb5910 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.952260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.362 [2024-11-25 13:22:47.952296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb7b50 with addr=10.0.0.2, port=4420
00:23:50.362 [2024-11-25 13:22:47.952322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb7b50 is same with the state(6) to be set
00:23:50.362 [2024-11-25 13:22:47.952397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb7b50 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.952472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:50.362 [2024-11-25 13:22:47.952491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:50.362 [2024-11-25 13:22:47.952505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:50.362 [2024-11-25 13:22:47.952518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:50.362 [2024-11-25 13:22:47.958641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:50.362 [2024-11-25 13:22:47.958910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:50.362 [2024-11-25 13:22:47.958942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x895000 with addr=10.0.0.2, port=4420
00:23:50.362 [2024-11-25 13:22:47.958959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895000 is same with the state(6) to be set
00:23:50.362 [2024-11-25 13:22:47.959023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895000 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.959087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:50.362 [2024-11-25 13:22:47.959104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:50.362 [2024-11-25 13:22:47.959131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:50.362 [2024-11-25 13:22:47.959147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:50.362 [2024-11-25 13:22:47.961387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2650 (9): Bad file descriptor
00:23:50.362 [2024-11-25 13:22:47.961590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.961981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.961996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.962012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.362 [2024-11-25 13:22:47.962026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.362 [2024-11-25 13:22:47.962042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.962974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.962990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.963004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.963020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.963034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.963050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.963064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.963080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.963093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.363 [2024-11-25 13:22:47.963113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.363 [2024-11-25 13:22:47.963129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.963562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.963577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98280 is same with the state(6) to be set
00:23:50.364 [2024-11-25 13:22:47.964894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.964918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.964939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.964954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.964969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.964983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.964999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:50.364 [2024-11-25 13:22:47.965494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:50.364 [2024-11-25 13:22:47.965508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.364 [2024-11-25 13:22:47.965524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.364 [2024-11-25 13:22:47.965537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.364 [2024-11-25 13:22:47.965553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.364 [2024-11-25 13:22:47.965566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.364 [2024-11-25 13:22:47.965587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.364 [2024-11-25 13:22:47.965602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.364 [2024-11-25 13:22:47.965617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.364 [2024-11-25 13:22:47.965631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.364 [2024-11-25 13:22:47.965647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965855] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.965981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.965995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966025] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 
13:22:47.966379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.365 [2024-11-25 13:22:47.966500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.365 [2024-11-25 13:22:47.966514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.966844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.966859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa99580 is same with the state(6) to be set 00:23:50.366 [2024-11-25 13:22:47.968155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:50.366 [2024-11-25 13:22:47.968179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968877] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.366 [2024-11-25 13:22:47.968964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.366 [2024-11-25 13:22:47.968977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.968993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969036] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 
13:22:47.969387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:50.367 [2024-11-25 13:22:47.969894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.969971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.969987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.970001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.970016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.367 [2024-11-25 13:22:47.970030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.367 [2024-11-25 13:22:47.970045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.970059] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.970075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.970089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.970103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc94fb0 is same with the state(6) to be set 00:23:50.368 [2024-11-25 13:22:47.971377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:50.368 [2024-11-25 13:22:47.971679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.971977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.971993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972369] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.368 [2024-11-25 13:22:47.972500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.368 [2024-11-25 13:22:47.972516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 
13:22:47.972872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.972974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.972990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.973310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.973326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc96490 is same with the state(6) to be set 00:23:50.369 [2024-11-25 13:22:47.974596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.974618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:50.369 [2024-11-25 13:22:47.974655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.974684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.974714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.974750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.974780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.974810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.369 [2024-11-25 13:22:47.974840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.369 [2024-11-25 13:22:47.974856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.974869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.974886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.974900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.974915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.974928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.974944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.974958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.974974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.974987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975505] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 
13:22:47.975847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.370 [2024-11-25 13:22:47.975880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.370 [2024-11-25 13:22:47.975895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.975911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.975925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.975940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.975954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.975969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.975983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.975998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:50.371 [2024-11-25 13:22:47.976362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.976520] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.976535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97970 is same with the state(6) to be set 00:23:50.371 [2024-11-25 13:22:47.977832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.977855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.977876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.977891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.977907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.977928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.977944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.977958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.977973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.977987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:50.371 [2024-11-25 13:22:47.978162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.371 [2024-11-25 13:22:47.978369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.371 [2024-11-25 13:22:47.978383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978843] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.978973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.978989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979002] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 
13:22:47.979357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.372 [2024-11-25 13:22:47.979521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.372 [2024-11-25 13:22:47.979537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.979755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.979769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9a550 is same with the state(6) to be set 00:23:50.373 [2024-11-25 13:22:47.981076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:50.373 [2024-11-25 13:22:47.981171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981864] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.981978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.981993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.982007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.982022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.373 [2024-11-25 13:22:47.982036] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.373 [2024-11-25 13:22:47.982052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 
13:22:47.982386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:50.374 [2024-11-25 13:22:47.982889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.982976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.982990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.983005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.374 [2024-11-25 13:22:47.983022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.374 [2024-11-25 13:22:47.983037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1de60 is same with the state(6) to be set 00:23:50.374 [2024-11-25 13:22:47.984295] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:50.374 [2024-11-25 13:22:47.984334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:50.374 [2024-11-25 13:22:47.984354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:50.374 [2024-11-25 13:22:47.984371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:50.374 [2024-11-25 13:22:47.984480] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:23:50.375 [2024-11-25 13:22:47.984508] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:50.375 [2024-11-25 13:22:47.984536] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:50.375 [2024-11-25 13:22:47.984561] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:23:50.636 [2024-11-25 13:22:48.000361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:50.636 [2024-11-25 13:22:48.000442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:50.636 [2024-11-25 13:22:48.000462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:50.636 [2024-11-25 13:22:48.000480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:50.636 [2024-11-25 13:22:48.000738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.000775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb7b50 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.000794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb7b50 is same with the state(6) to be set 00:23:50.636 [2024-11-25 13:22:48.000886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.000912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88b210 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.000928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88b210 is same with the state(6) to be set 00:23:50.636 [2024-11-25 13:22:48.001016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.001040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889a40 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.001056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889a40 is same with the state(6) to be set 00:23:50.636 [2024-11-25 13:22:48.001145] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.001173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88a290 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.001188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88a290 is same with the state(6) to be set 00:23:50.636 [2024-11-25 13:22:48.001232] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:23:50.636 [2024-11-25 13:22:48.001259] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:23:50.636 [2024-11-25 13:22:48.001310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88a290 (9): Bad file descriptor 00:23:50.636 [2024-11-25 13:22:48.001341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889a40 (9): Bad file descriptor 00:23:50.636 [2024-11-25 13:22:48.001364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88b210 (9): Bad file descriptor 00:23:50.636 [2024-11-25 13:22:48.001386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb7b50 (9): Bad file descriptor 00:23:50.636 task offset: 24448 on job bdev=Nvme1n1 fails 00:23:50.636 1740.29 IOPS, 108.77 MiB/s [2024-11-25T12:22:48.295Z] [2024-11-25 13:22:48.003465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.003496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb5910 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.003513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5910 is same with the state(6) to be set 00:23:50.636 [2024-11-25 
13:22:48.003611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.003636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc110 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.003651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc110 is same with the state(6) to be set 00:23:50.636 [2024-11-25 13:22:48.003735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.003760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb7970 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.003775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb7970 is same with the state(6) to be set 00:23:50.636 [2024-11-25 13:22:48.003869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.636 [2024-11-25 13:22:48.003894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce2870 with addr=10.0.0.2, port=4420 00:23:50.636 [2024-11-25 13:22:48.003909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2870 is same with the state(6) to be set 00:23:50.636 [2024-11-25 13:22:48.004037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 
13:22:48.004323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004491] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.636 [2024-11-25 13:22:48.004505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.636 [2024-11-25 13:22:48.004520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 
[2024-11-25 13:22:48.004837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.004976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.004990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.637 [2024-11-25 13:22:48.005715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.637 [2024-11-25 13:22:48.005729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.005978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.005994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.006008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 13:22:48.006024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.638 [2024-11-25 13:22:48.006039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.638 [2024-11-25 
13:22:48.006053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc9ba90 is same with the state(6) to be set 00:23:50.638 [2024-11-25 13:22:48.008134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:50.638 00:23:50.638 Latency(us) 00:23:50.638 [2024-11-25T12:22:48.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.638 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme1n1 ended in about 0.98 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme1n1 : 0.98 195.65 12.23 65.56 0.00 242453.07 5558.42 243891.01 00:23:50.638 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme2n1 ended in about 0.99 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme2n1 : 0.99 193.11 12.07 64.37 0.00 241459.96 19515.16 259425.47 00:23:50.638 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme3n1 ended in about 1.00 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme3n1 : 1.00 192.48 12.03 64.16 0.00 237634.37 16311.18 260978.92 00:23:50.638 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme4n1 ended in about 1.00 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme4n1 : 1.00 191.85 11.99 63.95 0.00 233905.68 18641.35 259425.47 00:23:50.638 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme5n1 ended in about 1.00 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme5n1 : 1.00 127.49 7.97 63.75 0.00 306943.87 24466.77 279620.27 00:23:50.638 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:50.638 Job: Nvme6n1 ended in about 1.01 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme6n1 : 1.01 195.60 12.22 63.54 0.00 222160.34 12330.48 262532.36 00:23:50.638 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme7n1 ended in about 0.98 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme7n1 : 0.98 195.81 12.24 5.10 0.00 279439.15 18932.62 264085.81 00:23:50.638 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme8n1 ended in about 1.01 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme8n1 : 1.01 190.02 11.88 63.34 0.00 218462.81 32428.18 254765.13 00:23:50.638 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme9n1 ended in about 1.04 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme9n1 : 1.04 127.33 7.96 61.74 0.00 288219.95 23981.32 295154.73 00:23:50.638 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:50.638 Job: Nvme10n1 ended in about 1.01 seconds with error 00:23:50.638 Verification LBA range: start 0x0 length 0x400 00:23:50.638 Nvme10n1 : 1.01 126.27 7.89 63.14 0.00 280660.70 20971.52 267192.70 00:23:50.638 [2024-11-25T12:22:48.297Z] =================================================================================================================== 00:23:50.638 [2024-11-25T12:22:48.297Z] Total : 1735.62 108.48 578.64 0.00 251456.44 5558.42 295154.73 00:23:50.638 [2024-11-25 13:22:48.035066] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:50.638 [2024-11-25 13:22:48.035142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:50.638 [2024-11-25 13:22:48.035227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xcb5910 (9): Bad file descriptor 00:23:50.638 [2024-11-25 13:22:48.035256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fc110 (9): Bad file descriptor 00:23:50.638 [2024-11-25 13:22:48.035275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb7970 (9): Bad file descriptor 00:23:50.638 [2024-11-25 13:22:48.035313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2870 (9): Bad file descriptor 00:23:50.638 [2024-11-25 13:22:48.035332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:50.638 [2024-11-25 13:22:48.035345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:50.638 [2024-11-25 13:22:48.035361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:50.638 [2024-11-25 13:22:48.035378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:50.638 [2024-11-25 13:22:48.035395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:50.638 [2024-11-25 13:22:48.035408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:50.638 [2024-11-25 13:22:48.035421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:50.638 [2024-11-25 13:22:48.035446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:23:50.638 [2024-11-25 13:22:48.035461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:50.638 [2024-11-25 13:22:48.035474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:50.638 [2024-11-25 13:22:48.035487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:50.638 [2024-11-25 13:22:48.035500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:50.638 [2024-11-25 13:22:48.035514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:50.638 [2024-11-25 13:22:48.035526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:50.638 [2024-11-25 13:22:48.035539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:50.638 [2024-11-25 13:22:48.035552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:50.638 [2024-11-25 13:22:48.035930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.638 [2024-11-25 13:22:48.035963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x895000 with addr=10.0.0.2, port=4420 00:23:50.638 [2024-11-25 13:22:48.035982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x895000 is same with the state(6) to be set 00:23:50.638 [2024-11-25 13:22:48.036076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.638 [2024-11-25 13:22:48.036101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce2650 with addr=10.0.0.2, port=4420 00:23:50.638 [2024-11-25 13:22:48.036116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2650 is same with the state(6) to be set 00:23:50.638 [2024-11-25 13:22:48.036131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:50.638 [2024-11-25 13:22:48.036144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:50.638 [2024-11-25 13:22:48.036157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:50.638 [2024-11-25 13:22:48.036171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:50.638 [2024-11-25 13:22:48.036186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:50.638 [2024-11-25 13:22:48.036198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:50.638 [2024-11-25 13:22:48.036211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:23:50.638 [2024-11-25 13:22:48.036223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.036238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.036250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.036262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.036274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.036287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.036310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.036331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.036344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:23:50.639 [2024-11-25 13:22:48.036788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x895000 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.036819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2650 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.037175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.037359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.037372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.037385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:50.639 [2024-11-25 13:22:48.037399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.037412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.037424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.037436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.037474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:50.639 [2024-11-25 13:22:48.037625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.037654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88a290 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.037670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88a290 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.037773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.037797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x889a40 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.037813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x889a40 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.037895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.037921] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88b210 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.037937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88b210 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.038013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.038044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb7b50 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.038060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb7b50 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.038142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.038167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce2870 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.038182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce2870 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.038274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.038299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb7970 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.038325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb7970 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.038432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.038459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc110 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.038474] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc110 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.038576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:50.639 [2024-11-25 13:22:48.038605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcb5910 with addr=10.0.0.2, port=4420 00:23:50.639 [2024-11-25 13:22:48.038620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb5910 is same with the state(6) to be set 00:23:50.639 [2024-11-25 13:22:48.038638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88a290 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x889a40 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88b210 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb7b50 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce2870 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb7970 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fc110 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcb5910 (9): Bad file descriptor 00:23:50.639 [2024-11-25 13:22:48.038806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 
00:23:50.639 [2024-11-25 13:22:48.038819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.038831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.038844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.038858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.038870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.038883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.038899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.038914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.038927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.038939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.038950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:23:50.639 [2024-11-25 13:22:48.038963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.038975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.038987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.038999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.039012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.039024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.039035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.039048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.039061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.039073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.039085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.039096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:50.639 [2024-11-25 13:22:48.039132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.039149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.039162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:50.639 [2024-11-25 13:22:48.039174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:50.639 [2024-11-25 13:22:48.039187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:50.639 [2024-11-25 13:22:48.039200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:50.639 [2024-11-25 13:22:48.039212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:50.640 [2024-11-25 13:22:48.039224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:23:50.899 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3221406 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3221406 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3221406 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:51.836 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:51.836 rmmod nvme_tcp 00:23:51.836 rmmod nvme_fabrics 00:23:51.836 rmmod nvme_keyring 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:52.094 13:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3221300 ']' 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3221300 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3221300 ']' 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3221300 00:23:52.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3221300) - No such process 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3221300 is not found' 00:23:52.094 Process with pid 3221300 is not found 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.094 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.001 00:23:54.001 real 0m7.590s 00:23:54.001 user 0m18.904s 00:23:54.001 sys 0m1.543s 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:54.001 ************************************ 00:23:54.001 END TEST nvmf_shutdown_tc3 00:23:54.001 ************************************ 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:54.001 ************************************ 00:23:54.001 START TEST nvmf_shutdown_tc4 00:23:54.001 ************************************ 00:23:54.001 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.001 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.002 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.002 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:54.002 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:54.002 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.002 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:23:54.002 Found net devices under 0000:09:00.0: cvl_0_0 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:54.002 Found net devices under 0000:09:00.1: cvl_0_1 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.002 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.003 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.003 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:54.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:23:54.261 00:23:54.261 --- 10.0.0.2 ping statistics --- 00:23:54.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.261 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:23:54.261 00:23:54.261 --- 10.0.0.1 ping statistics --- 00:23:54.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.261 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:54.261 13:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3222315 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3222315 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3222315 ']' 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.261 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.261 [2024-11-25 13:22:51.869401] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:23:54.261 [2024-11-25 13:22:51.869486] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.519 [2024-11-25 13:22:51.942740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:54.519 [2024-11-25 13:22:51.997742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.519 [2024-11-25 13:22:51.997796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.519 [2024-11-25 13:22:51.997825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.519 [2024-11-25 13:22:51.997836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.519 [2024-11-25 13:22:51.997845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:54.519 [2024-11-25 13:22:51.999297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.519 [2024-11-25 13:22:51.999424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:54.519 [2024-11-25 13:22:51.999490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:54.519 [2024-11-25 13:22:51.999494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.519 [2024-11-25 13:22:52.151122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.519 13:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.519 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.520 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.778 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:54.778 Malloc1 00:23:54.778 [2024-11-25 13:22:52.258664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.778 Malloc2 00:23:54.778 Malloc3 00:23:54.778 Malloc4 00:23:54.778 Malloc5 00:23:55.037 Malloc6 00:23:55.037 Malloc7 00:23:55.037 Malloc8 00:23:55.037 Malloc9 
00:23:55.037 Malloc10 00:23:55.295 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.295 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:55.295 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.295 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:55.295 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3222488 00:23:55.295 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:55.295 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:55.295 [2024-11-25 13:22:52.802914] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3222315 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3222315 ']' 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3222315 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3222315 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3222315' 00:24:00.564 killing process with pid 3222315 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3222315 00:24:00.564 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3222315 00:24:00.564 [2024-11-25 13:22:57.792393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 
13:22:57.792499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.792517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.792530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.792543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.792555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.792567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.792579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbbf0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794309] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dec10 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.794972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.564 [2024-11-25 13:22:57.795182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.795194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.795206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.795217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.795229] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8df0e0 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 
is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.796956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8de740 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.804127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b06b0 is same with the state(6) to be set 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 
00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 [2024-11-25 13:22:57.805943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.565 NVMe io qpair process completion error 00:24:00.565 [2024-11-25 13:22:57.807259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae210 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.807293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae210 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.807319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae210 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.807336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae210 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.807349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae210 is same with the state(6) to be set 00:24:00.565 [2024-11-25 
13:22:57.807362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae210 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.808116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae700 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.808149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae700 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.808164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae700 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.808178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae700 is same with the state(6) to be set 00:24:00.565 [2024-11-25 13:22:57.808190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae700 is same with the state(6) to be set 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with 
error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 [2024-11-25 13:22:57.810733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting 
I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error 
(sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 Write completed with error (sct=0, sc=8) 00:24:00.565 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 [2024-11-25 13:22:57.811799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting 
I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write 
completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 [2024-11-25 13:22:57.812994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 
starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 
00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, 
sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 Write completed with error (sct=0, sc=8) 00:24:00.566 starting I/O failed: -6 00:24:00.566 [2024-11-25 13:22:57.814767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.567 NVMe io qpair process completion error 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with 
error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 [2024-11-25 13:22:57.816094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write 
completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error 
(sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 [2024-11-25 13:22:57.817182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 [2024-11-25 13:22:57.817334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the state(6) to be set 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 [2024-11-25 13:22:57.817383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the state(6) to be set 00:24:00.567 starting I/O failed: -6 00:24:00.567 [2024-11-25 13:22:57.817398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the state(6) to be set 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 [2024-11-25 13:22:57.817410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the 
state(6) to be set 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 [2024-11-25 13:22:57.817422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the state(6) to be set 00:24:00.567 starting I/O failed: -6 00:24:00.567 [2024-11-25 13:22:57.817440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the state(6) to be set 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 [2024-11-25 13:22:57.817452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the state(6) to be set 00:24:00.567 starting I/O failed: -6 00:24:00.567 [2024-11-25 13:22:57.817464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b1f20 is same with the state(6) to be set 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write 
completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 starting I/O failed: -6 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.567 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 Write completed with error (sct=0, sc=8) 
00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 [2024-11-25 13:22:57.818347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 
starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 00:24:00.568 starting I/O failed: -6 00:24:00.568 Write completed with error (sct=0, sc=8) 
00:24:00.568 starting I/O failed: -6
00:24:00.568 Write completed with error (sct=0, sc=8)
00:24:00.568 [2024-11-25 13:22:57.820342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.568 NVMe io qpair process completion error
00:24:00.568 [2024-11-25 13:22:57.821597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:00.569 [2024-11-25 13:22:57.822616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.569 [2024-11-25 13:22:57.823863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.570 [2024-11-25 13:22:57.825671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.570 NVMe io qpair process completion error
00:24:00.570 [2024-11-25 13:22:57.827116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:00.570 [2024-11-25 13:22:57.828103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:00.571 [2024-11-25 13:22:57.829294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.571 [2024-11-25 13:22:57.831785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.571 NVMe io qpair process completion error
00:24:00.572 [2024-11-25 13:22:57.833879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write 
completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 [2024-11-25 13:22:57.835055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, 
sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error 
(sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.572 Write completed with error (sct=0, sc=8) 00:24:00.572 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with 
error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 [2024-11-25 13:22:57.838744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.573 NVMe io qpair process completion error 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 
00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 [2024-11-25 13:22:57.840185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:00.573 starting I/O failed: -6 00:24:00.573 starting I/O failed: -6 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 
00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write 
completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 [2024-11-25 13:22:57.841357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with 
error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.573 starting I/O failed: -6 00:24:00.573 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 
Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 [2024-11-25 13:22:57.842539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write 
completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 
Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 
00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 [2024-11-25 13:22:57.846897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.574 NVMe io qpair process completion error 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.574 starting I/O failed: -6 00:24:00.574 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write 
completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 [2024-11-25 13:22:57.848237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write 
completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 Write completed with error (sct=0, sc=8) 00:24:00.575 starting I/O failed: -6 00:24:00.575 Write completed with error 
(sct=0, sc=8)
00:24:00.575 Write completed with error (sct=0, sc=8)
00:24:00.575 starting I/O failed: -6
[... hundreds of identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:24:00.575 [2024-11-25 13:22:57.849389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error entries omitted ...]
00:24:00.575 [2024-11-25 13:22:57.850528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error entries omitted ...]
00:24:00.576 [2024-11-25 13:22:57.852231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.576 NVMe io qpair process completion error
[... repeated write-completion error entries omitted ...]
00:24:00.576 [2024-11-25 13:22:57.853405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error entries omitted ...]
00:24:00.577 [2024-11-25 13:22:57.854425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-completion error entries omitted ...]
00:24:00.577 [2024-11-25 13:22:57.855615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-completion error entries omitted ...]
00:24:00.578 [2024-11-25 13:22:57.857453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:00.578 NVMe io qpair process completion error
[... repeated write-completion error entries omitted ...]
00:24:00.579 [2024-11-25 13:22:57.861252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-completion error entries omitted ...]
00:24:00.579 [2024-11-25 13:22:57.862483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-completion error entries continue ...]
00:24:00.579 Write completed with error
(sct=0, sc=8) 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 [2024-11-25 13:22:57.863609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such 
device or address) on qpair id 1 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.579 Write completed with error (sct=0, sc=8) 00:24:00.579 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 
00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, 
sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 Write completed with error (sct=0, sc=8) 00:24:00.580 starting I/O failed: -6 00:24:00.580 [2024-11-25 13:22:57.867400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:00.580 NVMe io qpair process completion error 00:24:00.580 Initializing NVMe Controllers 00:24:00.580 Attached 
to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:00.580 Controller IO queue size 128, less than required. 00:24:00.580 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:24:00.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:24:00.580 Initialization complete. Launching workers. 
00:24:00.580 ======================================================== 00:24:00.580 Latency(us) 00:24:00.580 Device Information : IOPS MiB/s Average min max 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1816.74 78.06 70477.02 1143.41 134427.79 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1805.37 77.57 70942.46 1018.71 123735.54 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1788.74 76.86 71548.93 625.04 139620.58 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1796.11 77.18 71317.67 818.63 150844.11 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1783.90 76.65 70987.17 1129.78 117210.31 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1786.00 76.74 70928.18 962.68 117843.85 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1736.32 74.61 72985.38 1013.32 119441.65 00:24:00.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1795.90 77.17 70594.80 776.58 118434.37 00:24:00.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1819.26 78.17 69715.19 1087.58 117112.58 00:24:00.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1826.63 78.49 69491.15 829.37 129665.78 00:24:00.581 ======================================================== 00:24:00.581 Total : 17954.97 771.50 70887.13 625.04 150844.11 00:24:00.581 00:24:00.581 [2024-11-25 13:22:57.873706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c629e0 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.873809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c632c0 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.873873] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c626b0 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.873940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c63920 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.874003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c64ae0 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.874066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c64720 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.874129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c635f0 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.874188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c63c50 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.874245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c64900 is same with the state(6) to be set 00:24:00.581 [2024-11-25 13:22:57.874310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c62d10 is same with the state(6) to be set 00:24:00.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:00.839 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3222488 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3222488 00:24:01.774 13:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3222488 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:01.774 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:01.775 13:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.775 rmmod nvme_tcp 00:24:01.775 rmmod nvme_fabrics 00:24:01.775 rmmod nvme_keyring 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3222315 ']' 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3222315 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3222315 ']' 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3222315 00:24:01.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3222315) - No such process 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3222315 is not 
found' 00:24:01.775 Process with pid 3222315 is not found 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.775 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.333 00:24:04.333 real 0m9.816s 00:24:04.333 user 0m23.873s 00:24:04.333 sys 0m5.759s 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:04.333 ************************************ 00:24:04.333 END TEST nvmf_shutdown_tc4 00:24:04.333 ************************************ 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:04.333 00:24:04.333 real 0m37.256s 00:24:04.333 user 1m40.822s 00:24:04.333 sys 0m12.115s 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:04.333 ************************************ 00:24:04.333 END TEST nvmf_shutdown 00:24:04.333 ************************************ 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:04.333 ************************************ 00:24:04.333 START TEST nvmf_nsid 00:24:04.333 ************************************ 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:04.333 * Looking for test storage... 
00:24:04.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.333 
13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.333 --rc genhtml_branch_coverage=1 00:24:04.333 --rc genhtml_function_coverage=1 00:24:04.333 --rc genhtml_legend=1 00:24:04.333 --rc geninfo_all_blocks=1 00:24:04.333 --rc 
geninfo_unexecuted_blocks=1 00:24:04.333 00:24:04.333 ' 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.333 --rc genhtml_branch_coverage=1 00:24:04.333 --rc genhtml_function_coverage=1 00:24:04.333 --rc genhtml_legend=1 00:24:04.333 --rc geninfo_all_blocks=1 00:24:04.333 --rc geninfo_unexecuted_blocks=1 00:24:04.333 00:24:04.333 ' 00:24:04.333 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.333 --rc genhtml_branch_coverage=1 00:24:04.333 --rc genhtml_function_coverage=1 00:24:04.333 --rc genhtml_legend=1 00:24:04.333 --rc geninfo_all_blocks=1 00:24:04.333 --rc geninfo_unexecuted_blocks=1 00:24:04.333 00:24:04.333 ' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.334 --rc genhtml_branch_coverage=1 00:24:04.334 --rc genhtml_function_coverage=1 00:24:04.334 --rc genhtml_legend=1 00:24:04.334 --rc geninfo_all_blocks=1 00:24:04.334 --rc geninfo_unexecuted_blocks=1 00:24:04.334 00:24:04.334 ' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.334 13:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:04.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.334 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:06.236 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.236 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:06.237 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:06.237 Found net devices under 0000:09:00.0: cvl_0_0 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:06.237 Found net devices under 0000:09:00.1: cvl_0_1 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.237 13:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.237 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.496 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:06.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:24:06.496 00:24:06.496 --- 10.0.0.2 ping statistics --- 00:24:06.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.496 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:24:06.496 00:24:06.496 --- 10.0.0.1 ping statistics --- 00:24:06.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.496 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.496 13:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3225232 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3225232 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3225232 ']' 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.496 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:06.496 [2024-11-25 13:23:03.980664] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:24:06.496 [2024-11-25 13:23:03.980751] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.496 [2024-11-25 13:23:04.050984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.496 [2024-11-25 13:23:04.107255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.496 [2024-11-25 13:23:04.107329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.496 [2024-11-25 13:23:04.107345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.496 [2024-11-25 13:23:04.107357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.496 [2024-11-25 13:23:04.107366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:06.496 [2024-11-25 13:23:04.107986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3225258 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.755 
13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8d701045-fcdf-47a4-8504-fa6738aecf0a 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fea5d8a1-14c9-4b83-beba-e98e76e7e924 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=7b4c0e79-0749-467a-a233-1af8cfae84ce 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:06.755 null0 00:24:06.755 null1 00:24:06.755 null2 00:24:06.755 [2024-11-25 13:23:04.301690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.755 [2024-11-25 13:23:04.309837] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:24:06.755 [2024-11-25 13:23:04.309909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3225258 ] 00:24:06.755 [2024-11-25 13:23:04.325868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3225258 /var/tmp/tgt2.sock 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3225258 ']' 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:06.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.755 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:06.755 [2024-11-25 13:23:04.378475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.013 [2024-11-25 13:23:04.439010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.272 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.272 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:07.272 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:07.529 [2024-11-25 13:23:05.149525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.529 [2024-11-25 13:23:05.165764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:07.814 nvme0n1 nvme0n2 00:24:07.814 nvme1n1 00:24:07.814 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:07.814 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:07.814 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:08.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8d701045-fcdf-47a4-8504-fa6738aecf0a 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:09.387 13:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8d701045fcdf47a48504fa6738aecf0a 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8D701045FCDF47A48504FA6738AECF0A 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8D701045FCDF47A48504FA6738AECF0A == \8\D\7\0\1\0\4\5\F\C\D\F\4\7\A\4\8\5\0\4\F\A\6\7\3\8\A\E\C\F\0\A ]] 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fea5d8a1-14c9-4b83-beba-e98e76e7e924 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:09.387 
13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fea5d8a114c94b83bebae98e76e7e924 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FEA5D8A114C94B83BEBAE98E76E7E924 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FEA5D8A114C94B83BEBAE98E76E7E924 == \F\E\A\5\D\8\A\1\1\4\C\9\4\B\8\3\B\E\B\A\E\9\8\E\7\6\E\7\E\9\2\4 ]] 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 7b4c0e79-0749-467a-a233-1af8cfae84ce 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7b4c0e790749467aa2331af8cfae84ce 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7B4C0E790749467AA2331AF8CFAE84CE 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 7B4C0E790749467AA2331AF8CFAE84CE == \7\B\4\C\0\E\7\9\0\7\4\9\4\6\7\A\A\2\3\3\1\A\F\8\C\F\A\E\8\4\C\E ]] 00:24:09.387 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3225258 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3225258 ']' 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3225258 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225258 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225258' 00:24:09.646 killing process with pid 3225258 00:24:09.646 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3225258 00:24:09.646 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3225258 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:10.212 rmmod nvme_tcp 00:24:10.212 rmmod nvme_fabrics 00:24:10.212 rmmod nvme_keyring 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3225232 ']' 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3225232 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3225232 ']' 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3225232 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:10.212 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225232 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225232' 00:24:10.212 killing process with pid 3225232 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3225232 00:24:10.212 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3225232 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.472 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.472 13:23:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.379 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:12.379 00:24:12.379 real 0m8.450s 00:24:12.379 user 0m8.442s 00:24:12.379 sys 0m2.695s 00:24:12.379 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.380 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:12.380 ************************************ 00:24:12.380 END TEST nvmf_nsid 00:24:12.380 ************************************ 00:24:12.380 13:23:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:12.380 00:24:12.380 real 11m36.269s 00:24:12.380 user 27m23.853s 00:24:12.380 sys 2m49.231s 00:24:12.380 13:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.380 13:23:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:12.380 ************************************ 00:24:12.380 END TEST nvmf_target_extra 00:24:12.380 ************************************ 00:24:12.380 13:23:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:12.380 13:23:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:12.380 13:23:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.380 13:23:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:12.638 ************************************ 00:24:12.638 START TEST nvmf_host 00:24:12.638 ************************************ 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:12.638 * Looking for test storage... 
00:24:12.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:12.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.638 --rc genhtml_branch_coverage=1 00:24:12.638 --rc genhtml_function_coverage=1 00:24:12.638 --rc genhtml_legend=1 00:24:12.638 --rc geninfo_all_blocks=1 00:24:12.638 --rc geninfo_unexecuted_blocks=1 00:24:12.638 00:24:12.638 ' 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:12.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.638 --rc genhtml_branch_coverage=1 00:24:12.638 --rc genhtml_function_coverage=1 00:24:12.638 --rc genhtml_legend=1 00:24:12.638 --rc 
geninfo_all_blocks=1 00:24:12.638 --rc geninfo_unexecuted_blocks=1 00:24:12.638 00:24:12.638 ' 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:12.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.638 --rc genhtml_branch_coverage=1 00:24:12.638 --rc genhtml_function_coverage=1 00:24:12.638 --rc genhtml_legend=1 00:24:12.638 --rc geninfo_all_blocks=1 00:24:12.638 --rc geninfo_unexecuted_blocks=1 00:24:12.638 00:24:12.638 ' 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:12.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.638 --rc genhtml_branch_coverage=1 00:24:12.638 --rc genhtml_function_coverage=1 00:24:12.638 --rc genhtml_legend=1 00:24:12.638 --rc geninfo_all_blocks=1 00:24:12.638 --rc geninfo_unexecuted_blocks=1 00:24:12.638 00:24:12.638 ' 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.638 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.639 ************************************ 00:24:12.639 START TEST nvmf_multicontroller 00:24:12.639 ************************************ 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:12.639 * Looking for test storage... 
00:24:12.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:12.639 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.899 --rc genhtml_branch_coverage=1 00:24:12.899 --rc genhtml_function_coverage=1 
00:24:12.899 --rc genhtml_legend=1 00:24:12.899 --rc geninfo_all_blocks=1 00:24:12.899 --rc geninfo_unexecuted_blocks=1 00:24:12.899 00:24:12.899 ' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.899 --rc genhtml_branch_coverage=1 00:24:12.899 --rc genhtml_function_coverage=1 00:24:12.899 --rc genhtml_legend=1 00:24:12.899 --rc geninfo_all_blocks=1 00:24:12.899 --rc geninfo_unexecuted_blocks=1 00:24:12.899 00:24:12.899 ' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.899 --rc genhtml_branch_coverage=1 00:24:12.899 --rc genhtml_function_coverage=1 00:24:12.899 --rc genhtml_legend=1 00:24:12.899 --rc geninfo_all_blocks=1 00:24:12.899 --rc geninfo_unexecuted_blocks=1 00:24:12.899 00:24:12.899 ' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.899 --rc genhtml_branch_coverage=1 00:24:12.899 --rc genhtml_function_coverage=1 00:24:12.899 --rc genhtml_legend=1 00:24:12.899 --rc geninfo_all_blocks=1 00:24:12.899 --rc geninfo_unexecuted_blocks=1 00:24:12.899 00:24:12.899 ' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:12.899 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.900 13:23:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.900 13:23:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:14.806 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:14.806 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:14.806 13:23:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:14.806 Found net devices under 0000:09:00.0: cvl_0_0 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:14.806 Found net devices under 0000:09:00.1: cvl_0_1 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.806 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:15.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:24:15.065 00:24:15.065 --- 10.0.0.2 ping statistics --- 00:24:15.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.065 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:24:15.065 00:24:15.065 --- 10.0.0.1 ping statistics --- 00:24:15.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.065 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3227765 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3227765 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3227765 ']' 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.065 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.065 [2024-11-25 13:23:12.643197] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:24:15.065 [2024-11-25 13:23:12.643280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.065 [2024-11-25 13:23:12.712834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:15.323 [2024-11-25 13:23:12.770307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.323 [2024-11-25 13:23:12.770356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:15.323 [2024-11-25 13:23:12.770384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.323 [2024-11-25 13:23:12.770395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.323 [2024-11-25 13:23:12.770404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.323 [2024-11-25 13:23:12.771803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.323 [2024-11-25 13:23:12.771866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:15.323 [2024-11-25 13:23:12.771870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.323 [2024-11-25 13:23:12.912326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.323 Malloc0 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.323 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.324 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.324 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.324 [2024-11-25 
13:23:12.977644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.581 [2024-11-25 13:23:12.985506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.581 13:23:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.581 Malloc1 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.581 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3227843 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3227843 /var/tmp/bdevperf.sock 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # '[' -z 3227843 ']' 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.582 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 NVMe0n1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.841 1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:15.841 13:23:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 request: 00:24:15.841 { 00:24:15.841 "name": "NVMe0", 00:24:15.841 "trtype": "tcp", 00:24:15.841 "traddr": "10.0.0.2", 00:24:15.841 "adrfam": "ipv4", 00:24:15.841 "trsvcid": "4420", 00:24:15.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.841 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:15.841 "hostaddr": "10.0.0.1", 00:24:15.841 "prchk_reftag": false, 00:24:15.841 "prchk_guard": false, 00:24:15.841 "hdgst": false, 00:24:15.841 "ddgst": false, 00:24:15.841 "allow_unrecognized_csi": false, 00:24:15.841 "method": "bdev_nvme_attach_controller", 00:24:15.841 "req_id": 1 00:24:15.841 } 00:24:15.841 Got JSON-RPC error response 00:24:15.841 response: 00:24:15.841 { 00:24:15.841 "code": -114, 00:24:15.841 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:15.841 } 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:15.841 13:23:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 request: 00:24:15.841 { 00:24:15.841 "name": "NVMe0", 00:24:15.841 "trtype": "tcp", 00:24:15.841 "traddr": "10.0.0.2", 00:24:15.841 "adrfam": "ipv4", 00:24:15.841 "trsvcid": "4420", 00:24:15.841 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:15.841 "hostaddr": "10.0.0.1", 00:24:15.841 "prchk_reftag": false, 00:24:15.841 "prchk_guard": false, 00:24:15.841 "hdgst": false, 00:24:15.841 "ddgst": false, 00:24:15.841 "allow_unrecognized_csi": false, 00:24:15.841 "method": "bdev_nvme_attach_controller", 00:24:15.841 "req_id": 1 00:24:15.841 } 00:24:15.841 Got JSON-RPC error response 00:24:15.841 response: 00:24:15.841 { 00:24:15.841 "code": -114, 00:24:15.841 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:15.841 } 00:24:15.841 13:23:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.099 request: 00:24:16.099 { 00:24:16.099 "name": "NVMe0", 00:24:16.099 "trtype": "tcp", 00:24:16.099 "traddr": "10.0.0.2", 00:24:16.099 "adrfam": "ipv4", 00:24:16.099 "trsvcid": "4420", 00:24:16.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.099 "hostaddr": "10.0.0.1", 00:24:16.099 "prchk_reftag": false, 00:24:16.099 "prchk_guard": false, 00:24:16.099 "hdgst": false, 00:24:16.099 "ddgst": false, 00:24:16.099 "multipath": "disable", 00:24:16.099 "allow_unrecognized_csi": false, 00:24:16.099 "method": "bdev_nvme_attach_controller", 00:24:16.099 "req_id": 1 00:24:16.099 } 00:24:16.099 Got JSON-RPC error response 00:24:16.099 response: 00:24:16.099 { 00:24:16.099 "code": -114, 00:24:16.099 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:16.099 } 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.099 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.100 request: 00:24:16.100 { 00:24:16.100 "name": "NVMe0", 00:24:16.100 "trtype": "tcp", 00:24:16.100 "traddr": "10.0.0.2", 00:24:16.100 "adrfam": "ipv4", 00:24:16.100 "trsvcid": "4420", 00:24:16.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.100 "hostaddr": "10.0.0.1", 00:24:16.100 "prchk_reftag": false, 00:24:16.100 "prchk_guard": false, 00:24:16.100 "hdgst": false, 00:24:16.100 "ddgst": false, 00:24:16.100 "multipath": "failover", 00:24:16.100 "allow_unrecognized_csi": false, 00:24:16.100 "method": "bdev_nvme_attach_controller", 00:24:16.100 "req_id": 1 00:24:16.100 } 00:24:16.100 Got JSON-RPC error response 00:24:16.100 response: 00:24:16.100 { 00:24:16.100 "code": -114, 00:24:16.100 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:16.100 } 00:24:16.100 13:23:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.100 NVMe0n1 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.100 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.358 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:16.358 13:23:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:17.732 { 00:24:17.732 "results": [ 00:24:17.732 { 00:24:17.732 "job": "NVMe0n1", 00:24:17.732 "core_mask": "0x1", 00:24:17.732 "workload": "write", 00:24:17.732 "status": "finished", 00:24:17.732 "queue_depth": 128, 00:24:17.732 "io_size": 4096, 00:24:17.732 "runtime": 1.005169, 00:24:17.732 "iops": 17311.516769816815, 00:24:17.732 "mibps": 67.62311238209693, 00:24:17.732 "io_failed": 0, 00:24:17.732 "io_timeout": 0, 00:24:17.732 "avg_latency_us": 7382.831397599542, 00:24:17.732 "min_latency_us": 2220.9422222222224, 00:24:17.732 "max_latency_us": 13592.651851851851 00:24:17.732 } 00:24:17.732 ], 00:24:17.732 "core_count": 1 00:24:17.732 } 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3227843 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3227843 ']' 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3227843 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227843 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227843' 00:24:17.732 killing process with pid 3227843 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3227843 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3227843 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:17.732 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:17.989 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:17.989 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:17.989 [2024-11-25 13:23:13.093602] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:24:17.989 [2024-11-25 13:23:13.093692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227843 ] 00:24:17.989 [2024-11-25 13:23:13.161099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.989 [2024-11-25 13:23:13.220109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.989 [2024-11-25 13:23:13.937965] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 77c4fae8-c623-4616-9e86-efb7d41d21e3 already exists 00:24:17.989 [2024-11-25 13:23:13.938002] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:77c4fae8-c623-4616-9e86-efb7d41d21e3 alias for bdev NVMe1n1 00:24:17.989 [2024-11-25 13:23:13.938032] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:17.989 Running I/O for 1 seconds... 00:24:17.989 17273.00 IOPS, 67.47 MiB/s 00:24:17.989 Latency(us) 00:24:17.989 [2024-11-25T12:23:15.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.989 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:17.989 NVMe0n1 : 1.01 17311.52 67.62 0.00 0.00 7382.83 2220.94 13592.65 00:24:17.990 [2024-11-25T12:23:15.649Z] =================================================================================================================== 00:24:17.990 [2024-11-25T12:23:15.649Z] Total : 17311.52 67.62 0.00 0.00 7382.83 2220.94 13592.65 00:24:17.990 Received shutdown signal, test time was about 1.000000 seconds 00:24:17.990 00:24:17.990 Latency(us) 00:24:17.990 [2024-11-25T12:23:15.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.990 [2024-11-25T12:23:15.649Z] =================================================================================================================== 00:24:17.990 [2024-11-25T12:23:15.649Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:17.990 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.990 rmmod nvme_tcp 00:24:17.990 rmmod nvme_fabrics 00:24:17.990 rmmod nvme_keyring 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3227765 ']' 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3227765 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3227765 ']' 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3227765 
00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227765 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227765' 00:24:17.990 killing process with pid 3227765 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3227765 00:24:17.990 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3227765 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.248 13:23:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.153 13:23:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.153 00:24:20.153 real 0m7.576s 00:24:20.153 user 0m12.022s 00:24:20.153 sys 0m2.361s 00:24:20.153 13:23:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.153 13:23:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:20.153 ************************************ 00:24:20.153 END TEST nvmf_multicontroller 00:24:20.153 ************************************ 00:24:20.410 13:23:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:20.410 13:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:20.410 13:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.410 13:23:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.410 ************************************ 00:24:20.410 START TEST nvmf_aer 00:24:20.410 ************************************ 00:24:20.410 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:20.410 * Looking for test storage... 
00:24:20.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:20.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.411 --rc genhtml_branch_coverage=1 00:24:20.411 --rc genhtml_function_coverage=1 00:24:20.411 --rc genhtml_legend=1 00:24:20.411 --rc geninfo_all_blocks=1 00:24:20.411 --rc geninfo_unexecuted_blocks=1 00:24:20.411 00:24:20.411 ' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:20.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.411 --rc 
genhtml_branch_coverage=1 00:24:20.411 --rc genhtml_function_coverage=1 00:24:20.411 --rc genhtml_legend=1 00:24:20.411 --rc geninfo_all_blocks=1 00:24:20.411 --rc geninfo_unexecuted_blocks=1 00:24:20.411 00:24:20.411 ' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:20.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.411 --rc genhtml_branch_coverage=1 00:24:20.411 --rc genhtml_function_coverage=1 00:24:20.411 --rc genhtml_legend=1 00:24:20.411 --rc geninfo_all_blocks=1 00:24:20.411 --rc geninfo_unexecuted_blocks=1 00:24:20.411 00:24:20.411 ' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:20.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.411 --rc genhtml_branch_coverage=1 00:24:20.411 --rc genhtml_function_coverage=1 00:24:20.411 --rc genhtml_legend=1 00:24:20.411 --rc geninfo_all_blocks=1 00:24:20.411 --rc geninfo_unexecuted_blocks=1 00:24:20.411 00:24:20.411 ' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.411 13:23:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.411 13:23:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.411 13:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:20.411 13:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:20.411 13:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.411 13:23:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.943 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:22.944 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:22.944 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.944 13:23:20 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:22.944 Found net devices under 0000:09:00.0: cvl_0_0 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:22.944 Found net devices under 0000:09:00.1: cvl_0_1 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:22.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:24:22.944 00:24:22.944 --- 10.0.0.2 ping statistics --- 00:24:22.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.944 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:24:22.944 00:24:22.944 --- 10.0.0.1 ping statistics --- 00:24:22.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.944 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3230060 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3230060 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3230060 ']' 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.944 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.944 [2024-11-25 13:23:20.304893] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:24:22.944 [2024-11-25 13:23:20.304984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.944 [2024-11-25 13:23:20.378826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.944 [2024-11-25 13:23:20.440192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:22.944 [2024-11-25 13:23:20.440260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.944 [2024-11-25 13:23:20.440274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.944 [2024-11-25 13:23:20.440285] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.944 [2024-11-25 13:23:20.440295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.945 [2024-11-25 13:23:20.441895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.945 [2024-11-25 13:23:20.441953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.945 [2024-11-25 13:23:20.442018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.945 [2024-11-25 13:23:20.442021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.945 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.945 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:22.945 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.945 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.945 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.203 [2024-11-25 13:23:20.614516] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.203 Malloc0 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.203 [2024-11-25 13:23:20.678221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.203 [ 00:24:23.203 { 00:24:23.203 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:23.203 "subtype": "Discovery", 00:24:23.203 "listen_addresses": [], 00:24:23.203 "allow_any_host": true, 00:24:23.203 "hosts": [] 00:24:23.203 }, 00:24:23.203 { 00:24:23.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.203 "subtype": "NVMe", 00:24:23.203 "listen_addresses": [ 00:24:23.203 { 00:24:23.203 "trtype": "TCP", 00:24:23.203 "adrfam": "IPv4", 00:24:23.203 "traddr": "10.0.0.2", 00:24:23.203 "trsvcid": "4420" 00:24:23.203 } 00:24:23.203 ], 00:24:23.203 "allow_any_host": true, 00:24:23.203 "hosts": [], 00:24:23.203 "serial_number": "SPDK00000000000001", 00:24:23.203 "model_number": "SPDK bdev Controller", 00:24:23.203 "max_namespaces": 2, 00:24:23.203 "min_cntlid": 1, 00:24:23.203 "max_cntlid": 65519, 00:24:23.203 "namespaces": [ 00:24:23.203 { 00:24:23.203 "nsid": 1, 00:24:23.203 "bdev_name": "Malloc0", 00:24:23.203 "name": "Malloc0", 00:24:23.203 "nguid": "6333B1A935B64F7EACD86762021920F1", 00:24:23.203 "uuid": "6333b1a9-35b6-4f7e-acd8-6762021920f1" 00:24:23.203 } 00:24:23.203 ] 00:24:23.203 } 00:24:23.203 ] 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3230209 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:23.203 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.461 Malloc1 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.461 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.461 [ 00:24:23.461 { 00:24:23.461 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:23.461 "subtype": "Discovery", 00:24:23.461 "listen_addresses": [], 00:24:23.461 "allow_any_host": true, 00:24:23.461 "hosts": [] 00:24:23.461 }, 00:24:23.461 { 00:24:23.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.461 "subtype": "NVMe", 00:24:23.461 "listen_addresses": [ 00:24:23.461 { 00:24:23.461 "trtype": "TCP", 00:24:23.461 "adrfam": "IPv4", 00:24:23.461 "traddr": "10.0.0.2", 00:24:23.462 "trsvcid": "4420" 00:24:23.462 } 00:24:23.462 ], 00:24:23.462 "allow_any_host": true, 00:24:23.462 "hosts": [], 00:24:23.462 "serial_number": "SPDK00000000000001", 00:24:23.462 "model_number": 
"SPDK bdev Controller", 00:24:23.462 "max_namespaces": 2, 00:24:23.462 "min_cntlid": 1, 00:24:23.462 "max_cntlid": 65519, 00:24:23.462 "namespaces": [ 00:24:23.462 { 00:24:23.462 "nsid": 1, 00:24:23.462 "bdev_name": "Malloc0", 00:24:23.462 "name": "Malloc0", 00:24:23.462 "nguid": "6333B1A935B64F7EACD86762021920F1", 00:24:23.462 "uuid": "6333b1a9-35b6-4f7e-acd8-6762021920f1" 00:24:23.462 }, 00:24:23.462 { 00:24:23.462 "nsid": 2, 00:24:23.462 "bdev_name": "Malloc1", 00:24:23.462 "name": "Malloc1", 00:24:23.462 "nguid": "BB54B068854B4A61BF6A08EABEC23BC4", 00:24:23.462 "uuid": "bb54b068-854b-4a61-bf6a-08eabec23bc4" 00:24:23.462 } 00:24:23.462 ] 00:24:23.462 } 00:24:23.462 ] 00:24:23.462 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.462 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3230209 00:24:23.462 Asynchronous Event Request test 00:24:23.462 Attaching to 10.0.0.2 00:24:23.462 Attached to 10.0.0.2 00:24:23.462 Registering asynchronous event callbacks... 00:24:23.462 Starting namespace attribute notice tests for all controllers... 00:24:23.462 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:23.462 aer_cb - Changed Namespace 00:24:23.462 Cleaning up... 
00:24:23.462 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:23.462 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.462 13:23:20 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.462 rmmod nvme_tcp 
00:24:23.462 rmmod nvme_fabrics 00:24:23.462 rmmod nvme_keyring 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3230060 ']' 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3230060 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3230060 ']' 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3230060 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.462 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3230060 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230060' 00:24:23.721 killing process with pid 3230060 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3230060 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3230060 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.721 13:23:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.721 13:23:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.258 00:24:26.258 real 0m5.537s 00:24:26.258 user 0m4.420s 00:24:26.258 sys 0m2.028s 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:26.258 ************************************ 00:24:26.258 END TEST nvmf_aer 00:24:26.258 ************************************ 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.258 ************************************ 00:24:26.258 START TEST nvmf_async_init 
00:24:26.258 ************************************ 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:26.258 * Looking for test storage... 00:24:26.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:26.258 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:26.258 --rc genhtml_branch_coverage=1 00:24:26.258 --rc genhtml_function_coverage=1 00:24:26.258 --rc genhtml_legend=1 00:24:26.258 --rc geninfo_all_blocks=1 00:24:26.258 --rc geninfo_unexecuted_blocks=1 00:24:26.258 00:24:26.258 ' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:26.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.258 --rc genhtml_branch_coverage=1 00:24:26.258 --rc genhtml_function_coverage=1 00:24:26.258 --rc genhtml_legend=1 00:24:26.258 --rc geninfo_all_blocks=1 00:24:26.258 --rc geninfo_unexecuted_blocks=1 00:24:26.258 00:24:26.258 ' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:26.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.258 --rc genhtml_branch_coverage=1 00:24:26.258 --rc genhtml_function_coverage=1 00:24:26.258 --rc genhtml_legend=1 00:24:26.258 --rc geninfo_all_blocks=1 00:24:26.258 --rc geninfo_unexecuted_blocks=1 00:24:26.258 00:24:26.258 ' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:26.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.258 --rc genhtml_branch_coverage=1 00:24:26.258 --rc genhtml_function_coverage=1 00:24:26.258 --rc genhtml_legend=1 00:24:26.258 --rc geninfo_all_blocks=1 00:24:26.258 --rc geninfo_unexecuted_blocks=1 00:24:26.258 00:24:26.258 ' 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.258 13:23:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:26.258 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.259 
13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7ad35d4d0d6e4a6eab82ccff087ae24c 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.259 13:23:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.164 13:23:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:28.164 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:28.164 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:28.164 Found net devices under 0000:09:00.0: cvl_0_0 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:28.164 Found net devices under 0000:09:00.1: cvl_0_1 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.164 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.165 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.165 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.165 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.165 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:28.424 00:24:28.424 --- 10.0.0.2 ping statistics --- 00:24:28.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.424 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:24:28.424 00:24:28.424 --- 10.0.0.1 ping statistics --- 00:24:28.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.424 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3232152 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3232152 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3232152 ']' 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.424 13:23:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.424 [2024-11-25 13:23:26.012856] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:24:28.424 [2024-11-25 13:23:26.012935] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.683 [2024-11-25 13:23:26.082574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.683 [2024-11-25 13:23:26.152704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.683 [2024-11-25 13:23:26.152756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.683 [2024-11-25 13:23:26.152788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.683 [2024-11-25 13:23:26.152803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.683 [2024-11-25 13:23:26.152815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:28.683 [2024-11-25 13:23:26.153551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 [2024-11-25 13:23:26.299135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 null0 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7ad35d4d0d6e4a6eab82ccff087ae24c 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.683 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.683 [2024-11-25 13:23:26.339469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.941 nvme0n1 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.941 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.941 [ 00:24:28.941 { 00:24:28.941 "name": "nvme0n1", 00:24:28.941 "aliases": [ 00:24:28.941 "7ad35d4d-0d6e-4a6e-ab82-ccff087ae24c" 00:24:28.941 ], 00:24:28.941 "product_name": "NVMe disk", 00:24:28.941 "block_size": 512, 00:24:28.941 "num_blocks": 2097152, 00:24:28.941 "uuid": "7ad35d4d-0d6e-4a6e-ab82-ccff087ae24c", 00:24:28.941 "numa_id": 0, 00:24:28.941 "assigned_rate_limits": { 00:24:28.941 "rw_ios_per_sec": 0, 00:24:28.941 "rw_mbytes_per_sec": 0, 00:24:28.941 "r_mbytes_per_sec": 0, 00:24:28.941 "w_mbytes_per_sec": 0 00:24:28.941 }, 00:24:28.941 "claimed": false, 00:24:28.941 "zoned": false, 00:24:28.941 "supported_io_types": { 00:24:28.941 "read": true, 00:24:28.941 "write": true, 00:24:28.941 "unmap": false, 00:24:28.941 "flush": true, 00:24:28.941 "reset": true, 00:24:28.941 "nvme_admin": true, 00:24:28.941 "nvme_io": true, 00:24:28.941 "nvme_io_md": false, 00:24:28.941 "write_zeroes": true, 00:24:28.941 "zcopy": false, 00:24:28.941 "get_zone_info": false, 00:24:28.941 "zone_management": false, 00:24:28.941 "zone_append": false, 00:24:28.941 "compare": true, 00:24:28.941 "compare_and_write": true, 00:24:28.941 "abort": true, 00:24:28.941 "seek_hole": false, 00:24:28.941 "seek_data": false, 00:24:28.941 "copy": true, 00:24:28.941 
"nvme_iov_md": false 00:24:28.941 }, 00:24:28.941 "memory_domains": [ 00:24:28.941 { 00:24:28.941 "dma_device_id": "system", 00:24:28.941 "dma_device_type": 1 00:24:28.941 } 00:24:28.941 ], 00:24:28.941 "driver_specific": { 00:24:28.941 "nvme": [ 00:24:28.941 { 00:24:28.941 "trid": { 00:24:28.941 "trtype": "TCP", 00:24:28.941 "adrfam": "IPv4", 00:24:28.941 "traddr": "10.0.0.2", 00:24:28.941 "trsvcid": "4420", 00:24:28.941 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.941 }, 00:24:28.941 "ctrlr_data": { 00:24:28.942 "cntlid": 1, 00:24:28.942 "vendor_id": "0x8086", 00:24:28.942 "model_number": "SPDK bdev Controller", 00:24:28.942 "serial_number": "00000000000000000000", 00:24:28.942 "firmware_revision": "25.01", 00:24:28.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.942 "oacs": { 00:24:28.942 "security": 0, 00:24:28.942 "format": 0, 00:24:28.942 "firmware": 0, 00:24:28.942 "ns_manage": 0 00:24:28.942 }, 00:24:28.942 "multi_ctrlr": true, 00:24:28.942 "ana_reporting": false 00:24:28.942 }, 00:24:28.942 "vs": { 00:24:28.942 "nvme_version": "1.3" 00:24:28.942 }, 00:24:28.942 "ns_data": { 00:24:28.942 "id": 1, 00:24:28.942 "can_share": true 00:24:28.942 } 00:24:28.942 } 00:24:28.942 ], 00:24:28.942 "mp_policy": "active_passive" 00:24:28.942 } 00:24:28.942 } 00:24:28.942 ] 00:24:28.942 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.942 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:28.942 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.942 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:28.942 [2024-11-25 13:23:26.589215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:28.942 [2024-11-25 13:23:26.589351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x121fda0 (9): Bad file descriptor 00:24:29.200 [2024-11-25 13:23:26.721439] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.200 [ 00:24:29.200 { 00:24:29.200 "name": "nvme0n1", 00:24:29.200 "aliases": [ 00:24:29.200 "7ad35d4d-0d6e-4a6e-ab82-ccff087ae24c" 00:24:29.200 ], 00:24:29.200 "product_name": "NVMe disk", 00:24:29.200 "block_size": 512, 00:24:29.200 "num_blocks": 2097152, 00:24:29.200 "uuid": "7ad35d4d-0d6e-4a6e-ab82-ccff087ae24c", 00:24:29.200 "numa_id": 0, 00:24:29.200 "assigned_rate_limits": { 00:24:29.200 "rw_ios_per_sec": 0, 00:24:29.200 "rw_mbytes_per_sec": 0, 00:24:29.200 "r_mbytes_per_sec": 0, 00:24:29.200 "w_mbytes_per_sec": 0 00:24:29.200 }, 00:24:29.200 "claimed": false, 00:24:29.200 "zoned": false, 00:24:29.200 "supported_io_types": { 00:24:29.200 "read": true, 00:24:29.200 "write": true, 00:24:29.200 "unmap": false, 00:24:29.200 "flush": true, 00:24:29.200 "reset": true, 00:24:29.200 "nvme_admin": true, 00:24:29.200 "nvme_io": true, 00:24:29.200 "nvme_io_md": false, 00:24:29.200 "write_zeroes": true, 00:24:29.200 "zcopy": false, 00:24:29.200 "get_zone_info": false, 00:24:29.200 "zone_management": false, 00:24:29.200 "zone_append": false, 00:24:29.200 "compare": true, 00:24:29.200 "compare_and_write": true, 00:24:29.200 "abort": true, 00:24:29.200 "seek_hole": false, 00:24:29.200 "seek_data": false, 00:24:29.200 "copy": true, 00:24:29.200 "nvme_iov_md": false 00:24:29.200 }, 00:24:29.200 "memory_domains": [ 
00:24:29.200 { 00:24:29.200 "dma_device_id": "system", 00:24:29.200 "dma_device_type": 1 00:24:29.200 } 00:24:29.200 ], 00:24:29.200 "driver_specific": { 00:24:29.200 "nvme": [ 00:24:29.200 { 00:24:29.200 "trid": { 00:24:29.200 "trtype": "TCP", 00:24:29.200 "adrfam": "IPv4", 00:24:29.200 "traddr": "10.0.0.2", 00:24:29.200 "trsvcid": "4420", 00:24:29.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:29.200 }, 00:24:29.200 "ctrlr_data": { 00:24:29.200 "cntlid": 2, 00:24:29.200 "vendor_id": "0x8086", 00:24:29.200 "model_number": "SPDK bdev Controller", 00:24:29.200 "serial_number": "00000000000000000000", 00:24:29.200 "firmware_revision": "25.01", 00:24:29.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:29.200 "oacs": { 00:24:29.200 "security": 0, 00:24:29.200 "format": 0, 00:24:29.200 "firmware": 0, 00:24:29.200 "ns_manage": 0 00:24:29.200 }, 00:24:29.200 "multi_ctrlr": true, 00:24:29.200 "ana_reporting": false 00:24:29.200 }, 00:24:29.200 "vs": { 00:24:29.200 "nvme_version": "1.3" 00:24:29.200 }, 00:24:29.200 "ns_data": { 00:24:29.200 "id": 1, 00:24:29.200 "can_share": true 00:24:29.200 } 00:24:29.200 } 00:24:29.200 ], 00:24:29.200 "mp_policy": "active_passive" 00:24:29.200 } 00:24:29.200 } 00:24:29.200 ] 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XOPiy1va2W 
00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XOPiy1va2W 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.XOPiy1va2W 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.200 [2024-11-25 13:23:26.777847] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:29.200 [2024-11-25 13:23:26.777989] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.200 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.201 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.201 [2024-11-25 13:23:26.793889] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.459 nvme0n1 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.459 [ 00:24:29.459 { 00:24:29.459 "name": "nvme0n1", 00:24:29.459 "aliases": [ 00:24:29.459 "7ad35d4d-0d6e-4a6e-ab82-ccff087ae24c" 00:24:29.459 ], 00:24:29.459 "product_name": "NVMe disk", 00:24:29.459 "block_size": 512, 00:24:29.459 "num_blocks": 2097152, 00:24:29.459 "uuid": "7ad35d4d-0d6e-4a6e-ab82-ccff087ae24c", 00:24:29.459 "numa_id": 0, 00:24:29.459 "assigned_rate_limits": { 00:24:29.459 "rw_ios_per_sec": 0, 00:24:29.459 
"rw_mbytes_per_sec": 0, 00:24:29.459 "r_mbytes_per_sec": 0, 00:24:29.459 "w_mbytes_per_sec": 0 00:24:29.459 }, 00:24:29.459 "claimed": false, 00:24:29.459 "zoned": false, 00:24:29.459 "supported_io_types": { 00:24:29.459 "read": true, 00:24:29.459 "write": true, 00:24:29.459 "unmap": false, 00:24:29.459 "flush": true, 00:24:29.459 "reset": true, 00:24:29.459 "nvme_admin": true, 00:24:29.459 "nvme_io": true, 00:24:29.459 "nvme_io_md": false, 00:24:29.459 "write_zeroes": true, 00:24:29.459 "zcopy": false, 00:24:29.459 "get_zone_info": false, 00:24:29.459 "zone_management": false, 00:24:29.459 "zone_append": false, 00:24:29.459 "compare": true, 00:24:29.459 "compare_and_write": true, 00:24:29.459 "abort": true, 00:24:29.459 "seek_hole": false, 00:24:29.459 "seek_data": false, 00:24:29.459 "copy": true, 00:24:29.459 "nvme_iov_md": false 00:24:29.459 }, 00:24:29.459 "memory_domains": [ 00:24:29.459 { 00:24:29.459 "dma_device_id": "system", 00:24:29.459 "dma_device_type": 1 00:24:29.459 } 00:24:29.459 ], 00:24:29.459 "driver_specific": { 00:24:29.459 "nvme": [ 00:24:29.459 { 00:24:29.459 "trid": { 00:24:29.459 "trtype": "TCP", 00:24:29.459 "adrfam": "IPv4", 00:24:29.459 "traddr": "10.0.0.2", 00:24:29.459 "trsvcid": "4421", 00:24:29.459 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:29.459 }, 00:24:29.459 "ctrlr_data": { 00:24:29.459 "cntlid": 3, 00:24:29.459 "vendor_id": "0x8086", 00:24:29.459 "model_number": "SPDK bdev Controller", 00:24:29.459 "serial_number": "00000000000000000000", 00:24:29.459 "firmware_revision": "25.01", 00:24:29.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:29.459 "oacs": { 00:24:29.459 "security": 0, 00:24:29.459 "format": 0, 00:24:29.459 "firmware": 0, 00:24:29.459 "ns_manage": 0 00:24:29.459 }, 00:24:29.459 "multi_ctrlr": true, 00:24:29.459 "ana_reporting": false 00:24:29.459 }, 00:24:29.459 "vs": { 00:24:29.459 "nvme_version": "1.3" 00:24:29.459 }, 00:24:29.459 "ns_data": { 00:24:29.459 "id": 1, 00:24:29.459 "can_share": true 00:24:29.459 } 
00:24:29.459 } 00:24:29.459 ], 00:24:29.459 "mp_policy": "active_passive" 00:24:29.459 } 00:24:29.459 } 00:24:29.459 ] 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.XOPiy1va2W 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.459 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.460 rmmod nvme_tcp 00:24:29.460 rmmod nvme_fabrics 00:24:29.460 rmmod nvme_keyring 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:29.460 13:23:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3232152 ']' 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3232152 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3232152 ']' 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3232152 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.460 13:23:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3232152 00:24:29.460 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.460 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.460 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3232152' 00:24:29.460 killing process with pid 3232152 00:24:29.460 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3232152 00:24:29.460 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3232152 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.720 
13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.720 13:23:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.675 13:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.675 00:24:31.675 real 0m5.816s 00:24:31.675 user 0m2.283s 00:24:31.675 sys 0m1.994s 00:24:31.675 13:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.675 13:23:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:31.675 ************************************ 00:24:31.675 END TEST nvmf_async_init 00:24:31.675 ************************************ 00:24:31.675 13:23:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:31.675 13:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.676 13:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.676 13:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.676 ************************************ 00:24:31.676 START TEST dma 00:24:31.676 ************************************ 00:24:31.676 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:31.944 * Looking for test storage... 00:24:31.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.944 --rc genhtml_branch_coverage=1 00:24:31.944 --rc genhtml_function_coverage=1 00:24:31.944 --rc genhtml_legend=1 00:24:31.944 --rc geninfo_all_blocks=1 00:24:31.944 --rc geninfo_unexecuted_blocks=1 00:24:31.944 00:24:31.944 ' 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.944 --rc genhtml_branch_coverage=1 00:24:31.944 --rc genhtml_function_coverage=1 
00:24:31.944 --rc genhtml_legend=1 00:24:31.944 --rc geninfo_all_blocks=1 00:24:31.944 --rc geninfo_unexecuted_blocks=1 00:24:31.944 00:24:31.944 ' 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.944 --rc genhtml_branch_coverage=1 00:24:31.944 --rc genhtml_function_coverage=1 00:24:31.944 --rc genhtml_legend=1 00:24:31.944 --rc geninfo_all_blocks=1 00:24:31.944 --rc geninfo_unexecuted_blocks=1 00:24:31.944 00:24:31.944 ' 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.944 --rc genhtml_branch_coverage=1 00:24:31.944 --rc genhtml_function_coverage=1 00:24:31.944 --rc genhtml_legend=1 00:24:31.944 --rc geninfo_all_blocks=1 00:24:31.944 --rc geninfo_unexecuted_blocks=1 00:24:31.944 00:24:31.944 ' 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.944 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:31.945 
13:23:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:31.945 00:24:31.945 real 0m0.170s 00:24:31.945 user 0m0.119s 00:24:31.945 sys 0m0.061s 00:24:31.945 13:23:29 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:31.945 ************************************ 00:24:31.945 END TEST dma 00:24:31.945 ************************************ 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.945 ************************************ 00:24:31.945 START TEST nvmf_identify 00:24:31.945 ************************************ 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:31.945 * Looking for test storage... 
00:24:31.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.945 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.204 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:32.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.204 --rc genhtml_branch_coverage=1 00:24:32.205 --rc genhtml_function_coverage=1 00:24:32.205 --rc genhtml_legend=1 00:24:32.205 --rc geninfo_all_blocks=1 00:24:32.205 --rc geninfo_unexecuted_blocks=1 00:24:32.205 00:24:32.205 ' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:24:32.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.205 --rc genhtml_branch_coverage=1 00:24:32.205 --rc genhtml_function_coverage=1 00:24:32.205 --rc genhtml_legend=1 00:24:32.205 --rc geninfo_all_blocks=1 00:24:32.205 --rc geninfo_unexecuted_blocks=1 00:24:32.205 00:24:32.205 ' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.205 --rc genhtml_branch_coverage=1 00:24:32.205 --rc genhtml_function_coverage=1 00:24:32.205 --rc genhtml_legend=1 00:24:32.205 --rc geninfo_all_blocks=1 00:24:32.205 --rc geninfo_unexecuted_blocks=1 00:24:32.205 00:24:32.205 ' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.205 --rc genhtml_branch_coverage=1 00:24:32.205 --rc genhtml_function_coverage=1 00:24:32.205 --rc genhtml_legend=1 00:24:32.205 --rc geninfo_all_blocks=1 00:24:32.205 --rc geninfo_unexecuted_blocks=1 00:24:32.205 00:24:32.205 ' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.205 13:23:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.738 13:23:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:34.738 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.738 
13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:34.738 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.738 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:34.738 Found net devices under 0000:09:00.0: cvl_0_0 00:24:34.739 13:23:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:34.739 Found net devices under 0000:09:00.1: cvl_0_1 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:34.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:24:34.739 00:24:34.739 --- 10.0.0.2 ping statistics --- 00:24:34.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.739 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:24:34.739 00:24:34.739 --- 10.0.0.1 ping statistics --- 00:24:34.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.739 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3234303 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3234303 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3234303 ']' 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.739 13:23:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.739 [2024-11-25 13:23:32.002862] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:24:34.739 [2024-11-25 13:23:32.002941] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.739 [2024-11-25 13:23:32.081153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.739 [2024-11-25 13:23:32.141880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.739 [2024-11-25 13:23:32.141930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.739 [2024-11-25 13:23:32.141958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.739 [2024-11-25 13:23:32.141968] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.739 [2024-11-25 13:23:32.141977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.739 [2024-11-25 13:23:32.143658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.739 [2024-11-25 13:23:32.143715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.739 [2024-11-25 13:23:32.143789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.739 [2024-11-25 13:23:32.143793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.739 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.739 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:34.739 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.739 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 [2024-11-25 13:23:32.261749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 Malloc0 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.740 13:23:32 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 [2024-11-25 13:23:32.340505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 13:23:32 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:34.740 [ 00:24:34.740 { 00:24:34.740 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:34.740 "subtype": "Discovery", 00:24:34.740 "listen_addresses": [ 00:24:34.740 { 00:24:34.740 "trtype": "TCP", 00:24:34.740 "adrfam": "IPv4", 00:24:34.740 "traddr": "10.0.0.2", 00:24:34.740 "trsvcid": "4420" 00:24:34.740 } 00:24:34.740 ], 00:24:34.740 "allow_any_host": true, 00:24:34.740 "hosts": [] 00:24:34.740 }, 00:24:34.740 { 00:24:34.740 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.740 "subtype": "NVMe", 00:24:34.740 "listen_addresses": [ 00:24:34.740 { 00:24:34.740 "trtype": "TCP", 00:24:34.740 "adrfam": "IPv4", 00:24:34.740 "traddr": "10.0.0.2", 00:24:34.740 "trsvcid": "4420" 00:24:34.740 } 00:24:34.740 ], 00:24:34.740 "allow_any_host": true, 00:24:34.740 "hosts": [], 00:24:34.740 "serial_number": "SPDK00000000000001", 00:24:34.740 "model_number": "SPDK bdev Controller", 00:24:34.740 "max_namespaces": 32, 00:24:34.740 "min_cntlid": 1, 00:24:34.740 "max_cntlid": 65519, 00:24:34.740 "namespaces": [ 00:24:34.740 { 00:24:34.740 "nsid": 1, 00:24:34.740 "bdev_name": "Malloc0", 00:24:34.740 "name": "Malloc0", 00:24:34.740 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:34.740 "eui64": "ABCDEF0123456789", 00:24:34.740 "uuid": "352e0278-6941-465d-81cd-d86c23f9ab2c" 00:24:34.740 } 00:24:34.740 ] 00:24:34.740 } 00:24:34.740 ] 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.740 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:34.740 [2024-11-25 13:23:32.378708] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:24:34.740 [2024-11-25 13:23:32.378746] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234441 ] 00:24:35.004 [2024-11-25 13:23:32.425749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:35.004 [2024-11-25 13:23:32.425815] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:35.004 [2024-11-25 13:23:32.425826] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:35.004 [2024-11-25 13:23:32.425840] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:35.004 [2024-11-25 13:23:32.425856] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:35.004 [2024-11-25 13:23:32.433723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:35.004 [2024-11-25 13:23:32.433788] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x241b690 0 00:24:35.004 [2024-11-25 13:23:32.441320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:35.004 [2024-11-25 13:23:32.441343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:35.004 [2024-11-25 13:23:32.441362] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:35.004 [2024-11-25 13:23:32.441368] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:35.004 [2024-11-25 13:23:32.441416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.441430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.441437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.004 [2024-11-25 13:23:32.441454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:35.004 [2024-11-25 13:23:32.441480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.004 [2024-11-25 13:23:32.451316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.004 [2024-11-25 13:23:32.451336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.004 [2024-11-25 13:23:32.451344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.451360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.004 [2024-11-25 13:23:32.451376] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:35.004 [2024-11-25 13:23:32.451390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:35.004 [2024-11-25 13:23:32.451400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:35.004 [2024-11-25 13:23:32.451421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.451431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.451437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 
00:24:35.004 [2024-11-25 13:23:32.451449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.004 [2024-11-25 13:23:32.451473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.004 [2024-11-25 13:23:32.451584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.004 [2024-11-25 13:23:32.451600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.004 [2024-11-25 13:23:32.451607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.451614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.004 [2024-11-25 13:23:32.451624] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:35.004 [2024-11-25 13:23:32.451641] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:35.004 [2024-11-25 13:23:32.451654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.451662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.451668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.004 [2024-11-25 13:23:32.451682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.004 [2024-11-25 13:23:32.451707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.004 [2024-11-25 13:23:32.451787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.004 [2024-11-25 13:23:32.451803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:35.004 [2024-11-25 13:23:32.451810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.004 [2024-11-25 13:23:32.451817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.005 [2024-11-25 13:23:32.451826] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:35.005 [2024-11-25 13:23:32.451841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:35.005 [2024-11-25 13:23:32.451857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.451865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.451872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.005 [2024-11-25 13:23:32.451882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.005 [2024-11-25 13:23:32.451905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.005 [2024-11-25 13:23:32.451986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.005 [2024-11-25 13:23:32.452001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.005 [2024-11-25 13:23:32.452007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.005 [2024-11-25 13:23:32.452023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:35.005 [2024-11-25 13:23:32.452041] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.005 [2024-11-25 13:23:32.452069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.005 [2024-11-25 13:23:32.452092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.005 [2024-11-25 13:23:32.452172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.005 [2024-11-25 13:23:32.452187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.005 [2024-11-25 13:23:32.452194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.005 [2024-11-25 13:23:32.452209] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:35.005 [2024-11-25 13:23:32.452221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:35.005 [2024-11-25 13:23:32.452235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:35.005 [2024-11-25 13:23:32.452347] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:35.005 [2024-11-25 13:23:32.452359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:35.005 [2024-11-25 13:23:32.452374] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.005 [2024-11-25 13:23:32.452404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.005 [2024-11-25 13:23:32.452441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.005 [2024-11-25 13:23:32.452552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.005 [2024-11-25 13:23:32.452567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.005 [2024-11-25 13:23:32.452574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.005 [2024-11-25 13:23:32.452589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:35.005 [2024-11-25 13:23:32.452610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.005 [2024-11-25 13:23:32.452636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.005 [2024-11-25 13:23:32.452662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.005 [2024-11-25 
13:23:32.452748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.005 [2024-11-25 13:23:32.452763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.005 [2024-11-25 13:23:32.452770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.005 [2024-11-25 13:23:32.452784] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:35.005 [2024-11-25 13:23:32.452796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:35.005 [2024-11-25 13:23:32.452811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:35.005 [2024-11-25 13:23:32.452826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:35.005 [2024-11-25 13:23:32.452843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.452853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.005 [2024-11-25 13:23:32.452864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.005 [2024-11-25 13:23:32.452887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.005 [2024-11-25 13:23:32.453033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.005 [2024-11-25 13:23:32.453054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:35.005 [2024-11-25 13:23:32.453067] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.453089] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x241b690): datao=0, datal=4096, cccid=0 00:24:35.005 [2024-11-25 13:23:32.453097] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247d100) on tqpair(0x241b690): expected_datao=0, payload_size=4096 00:24:35.005 [2024-11-25 13:23:32.453105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.453125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.453140] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.453173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.005 [2024-11-25 13:23:32.453184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.005 [2024-11-25 13:23:32.453190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.453197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.005 [2024-11-25 13:23:32.453209] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:35.005 [2024-11-25 13:23:32.453223] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:35.005 [2024-11-25 13:23:32.453231] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:35.005 [2024-11-25 13:23:32.453240] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:35.005 [2024-11-25 13:23:32.453247] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:35.005 [2024-11-25 13:23:32.453255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:35.005 [2024-11-25 13:23:32.453270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:35.005 [2024-11-25 13:23:32.453299] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.453316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.005 [2024-11-25 13:23:32.453323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.005 [2024-11-25 13:23:32.453334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:35.005 [2024-11-25 13:23:32.453358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.006 [2024-11-25 13:23:32.453456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.006 [2024-11-25 13:23:32.453471] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.006 [2024-11-25 13:23:32.453478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.006 [2024-11-25 13:23:32.453496] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.453525] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.006 [2024-11-25 13:23:32.453535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.453557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.006 [2024-11-25 13:23:32.453567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.453589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.006 [2024-11-25 13:23:32.453598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.453625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.006 [2024-11-25 13:23:32.453637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:35.006 [2024-11-25 13:23:32.453673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:35.006 [2024-11-25 13:23:32.453688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.453705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.006 [2024-11-25 13:23:32.453728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d100, cid 0, qid 0 00:24:35.006 [2024-11-25 13:23:32.453758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d280, cid 1, qid 0 00:24:35.006 [2024-11-25 13:23:32.453769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d400, cid 2, qid 0 00:24:35.006 [2024-11-25 13:23:32.453776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.006 [2024-11-25 13:23:32.453783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d700, cid 4, qid 0 00:24:35.006 [2024-11-25 13:23:32.453919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.006 [2024-11-25 13:23:32.453934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.006 [2024-11-25 13:23:32.453941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d700) on tqpair=0x241b690 00:24:35.006 [2024-11-25 13:23:32.453957] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:35.006 [2024-11-25 13:23:32.453966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:35.006 [2024-11-25 13:23:32.453986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.453997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.454008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.006 [2024-11-25 13:23:32.454029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d700, cid 4, qid 0 00:24:35.006 [2024-11-25 13:23:32.454133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.006 [2024-11-25 13:23:32.454153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.006 [2024-11-25 13:23:32.454161] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454170] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x241b690): datao=0, datal=4096, cccid=4 00:24:35.006 [2024-11-25 13:23:32.454181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247d700) on tqpair(0x241b690): expected_datao=0, payload_size=4096 00:24:35.006 [2024-11-25 13:23:32.454189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454200] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454207] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.006 [2024-11-25 13:23:32.454228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.006 [2024-11-25 13:23:32.454235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x247d700) on tqpair=0x241b690 00:24:35.006 [2024-11-25 13:23:32.454267] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:35.006 [2024-11-25 13:23:32.454312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.454335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.006 [2024-11-25 13:23:32.454357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454365] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x241b690) 00:24:35.006 [2024-11-25 13:23:32.454380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.006 [2024-11-25 13:23:32.454407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d700, cid 4, qid 0 00:24:35.006 [2024-11-25 13:23:32.454425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d880, cid 5, qid 0 00:24:35.006 [2024-11-25 13:23:32.454565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.006 [2024-11-25 13:23:32.454580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.006 [2024-11-25 13:23:32.454587] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454594] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x241b690): datao=0, datal=1024, cccid=4 00:24:35.006 [2024-11-25 13:23:32.454602] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247d700) on tqpair(0x241b690): expected_datao=0, payload_size=1024 00:24:35.006 [2024-11-25 13:23:32.454612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454622] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454629] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.006 [2024-11-25 13:23:32.454646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.006 [2024-11-25 13:23:32.454652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.454659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d880) on tqpair=0x241b690 00:24:35.006 [2024-11-25 13:23:32.495397] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.006 [2024-11-25 13:23:32.495421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.006 [2024-11-25 13:23:32.495430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.495437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d700) on tqpair=0x241b690 00:24:35.006 [2024-11-25 13:23:32.495456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.006 [2024-11-25 13:23:32.495465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x241b690) 00:24:35.007 [2024-11-25 13:23:32.495477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.007 [2024-11-25 13:23:32.495511] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d700, cid 4, qid 0 00:24:35.007 [2024-11-25 13:23:32.495625] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.007 [2024-11-25 13:23:32.495645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.007 [2024-11-25 13:23:32.495654] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495664] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x241b690): datao=0, datal=3072, cccid=4 00:24:35.007 [2024-11-25 13:23:32.495677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247d700) on tqpair(0x241b690): expected_datao=0, payload_size=3072 00:24:35.007 [2024-11-25 13:23:32.495690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495701] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495709] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.007 [2024-11-25 13:23:32.495731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.007 [2024-11-25 13:23:32.495737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d700) on tqpair=0x241b690 00:24:35.007 [2024-11-25 13:23:32.495760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x241b690) 00:24:35.007 [2024-11-25 13:23:32.495780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.007 [2024-11-25 13:23:32.495812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d700, cid 4, qid 0 00:24:35.007 [2024-11-25 
13:23:32.495915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.007 [2024-11-25 13:23:32.495930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.007 [2024-11-25 13:23:32.495937] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495944] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x241b690): datao=0, datal=8, cccid=4 00:24:35.007 [2024-11-25 13:23:32.495951] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x247d700) on tqpair(0x241b690): expected_datao=0, payload_size=8 00:24:35.007 [2024-11-25 13:23:32.495962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495973] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.495980] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.536396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.007 [2024-11-25 13:23:32.536421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.007 [2024-11-25 13:23:32.536430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.007 [2024-11-25 13:23:32.536437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d700) on tqpair=0x241b690 00:24:35.007 ===================================================== 00:24:35.007 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:35.007 ===================================================== 00:24:35.007 Controller Capabilities/Features 00:24:35.007 ================================ 00:24:35.007 Vendor ID: 0000 00:24:35.007 Subsystem Vendor ID: 0000 00:24:35.007 Serial Number: .................... 00:24:35.007 Model Number: ........................................ 
00:24:35.007 Firmware Version: 25.01 00:24:35.007 Recommended Arb Burst: 0 00:24:35.007 IEEE OUI Identifier: 00 00 00 00:24:35.007 Multi-path I/O 00:24:35.007 May have multiple subsystem ports: No 00:24:35.007 May have multiple controllers: No 00:24:35.007 Associated with SR-IOV VF: No 00:24:35.007 Max Data Transfer Size: 131072 00:24:35.007 Max Number of Namespaces: 0 00:24:35.007 Max Number of I/O Queues: 1024 00:24:35.007 NVMe Specification Version (VS): 1.3 00:24:35.007 NVMe Specification Version (Identify): 1.3 00:24:35.007 Maximum Queue Entries: 128 00:24:35.007 Contiguous Queues Required: Yes 00:24:35.007 Arbitration Mechanisms Supported 00:24:35.007 Weighted Round Robin: Not Supported 00:24:35.007 Vendor Specific: Not Supported 00:24:35.007 Reset Timeout: 15000 ms 00:24:35.007 Doorbell Stride: 4 bytes 00:24:35.007 NVM Subsystem Reset: Not Supported 00:24:35.007 Command Sets Supported 00:24:35.007 NVM Command Set: Supported 00:24:35.007 Boot Partition: Not Supported 00:24:35.007 Memory Page Size Minimum: 4096 bytes 00:24:35.007 Memory Page Size Maximum: 4096 bytes 00:24:35.007 Persistent Memory Region: Not Supported 00:24:35.007 Optional Asynchronous Events Supported 00:24:35.007 Namespace Attribute Notices: Not Supported 00:24:35.007 Firmware Activation Notices: Not Supported 00:24:35.007 ANA Change Notices: Not Supported 00:24:35.007 PLE Aggregate Log Change Notices: Not Supported 00:24:35.007 LBA Status Info Alert Notices: Not Supported 00:24:35.007 EGE Aggregate Log Change Notices: Not Supported 00:24:35.007 Normal NVM Subsystem Shutdown event: Not Supported 00:24:35.007 Zone Descriptor Change Notices: Not Supported 00:24:35.007 Discovery Log Change Notices: Supported 00:24:35.007 Controller Attributes 00:24:35.007 128-bit Host Identifier: Not Supported 00:24:35.007 Non-Operational Permissive Mode: Not Supported 00:24:35.007 NVM Sets: Not Supported 00:24:35.007 Read Recovery Levels: Not Supported 00:24:35.007 Endurance Groups: Not Supported 00:24:35.007 
Predictable Latency Mode: Not Supported 00:24:35.007 Traffic Based Keep ALive: Not Supported 00:24:35.007 Namespace Granularity: Not Supported 00:24:35.007 SQ Associations: Not Supported 00:24:35.007 UUID List: Not Supported 00:24:35.007 Multi-Domain Subsystem: Not Supported 00:24:35.007 Fixed Capacity Management: Not Supported 00:24:35.007 Variable Capacity Management: Not Supported 00:24:35.007 Delete Endurance Group: Not Supported 00:24:35.007 Delete NVM Set: Not Supported 00:24:35.007 Extended LBA Formats Supported: Not Supported 00:24:35.007 Flexible Data Placement Supported: Not Supported 00:24:35.007 00:24:35.007 Controller Memory Buffer Support 00:24:35.007 ================================ 00:24:35.007 Supported: No 00:24:35.007 00:24:35.007 Persistent Memory Region Support 00:24:35.007 ================================ 00:24:35.007 Supported: No 00:24:35.007 00:24:35.007 Admin Command Set Attributes 00:24:35.007 ============================ 00:24:35.007 Security Send/Receive: Not Supported 00:24:35.007 Format NVM: Not Supported 00:24:35.007 Firmware Activate/Download: Not Supported 00:24:35.007 Namespace Management: Not Supported 00:24:35.007 Device Self-Test: Not Supported 00:24:35.007 Directives: Not Supported 00:24:35.007 NVMe-MI: Not Supported 00:24:35.007 Virtualization Management: Not Supported 00:24:35.007 Doorbell Buffer Config: Not Supported 00:24:35.007 Get LBA Status Capability: Not Supported 00:24:35.007 Command & Feature Lockdown Capability: Not Supported 00:24:35.007 Abort Command Limit: 1 00:24:35.007 Async Event Request Limit: 4 00:24:35.007 Number of Firmware Slots: N/A 00:24:35.007 Firmware Slot 1 Read-Only: N/A 00:24:35.007 Firmware Activation Without Reset: N/A 00:24:35.007 Multiple Update Detection Support: N/A 00:24:35.007 Firmware Update Granularity: No Information Provided 00:24:35.007 Per-Namespace SMART Log: No 00:24:35.007 Asymmetric Namespace Access Log Page: Not Supported 00:24:35.007 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:35.007 Command Effects Log Page: Not Supported 00:24:35.007 Get Log Page Extended Data: Supported 00:24:35.007 Telemetry Log Pages: Not Supported 00:24:35.008 Persistent Event Log Pages: Not Supported 00:24:35.008 Supported Log Pages Log Page: May Support 00:24:35.008 Commands Supported & Effects Log Page: Not Supported 00:24:35.008 Feature Identifiers & Effects Log Page:May Support 00:24:35.008 NVMe-MI Commands & Effects Log Page: May Support 00:24:35.008 Data Area 4 for Telemetry Log: Not Supported 00:24:35.008 Error Log Page Entries Supported: 128 00:24:35.008 Keep Alive: Not Supported 00:24:35.008 00:24:35.008 NVM Command Set Attributes 00:24:35.008 ========================== 00:24:35.008 Submission Queue Entry Size 00:24:35.008 Max: 1 00:24:35.008 Min: 1 00:24:35.008 Completion Queue Entry Size 00:24:35.008 Max: 1 00:24:35.008 Min: 1 00:24:35.008 Number of Namespaces: 0 00:24:35.008 Compare Command: Not Supported 00:24:35.008 Write Uncorrectable Command: Not Supported 00:24:35.008 Dataset Management Command: Not Supported 00:24:35.008 Write Zeroes Command: Not Supported 00:24:35.008 Set Features Save Field: Not Supported 00:24:35.008 Reservations: Not Supported 00:24:35.008 Timestamp: Not Supported 00:24:35.008 Copy: Not Supported 00:24:35.008 Volatile Write Cache: Not Present 00:24:35.008 Atomic Write Unit (Normal): 1 00:24:35.008 Atomic Write Unit (PFail): 1 00:24:35.008 Atomic Compare & Write Unit: 1 00:24:35.008 Fused Compare & Write: Supported 00:24:35.008 Scatter-Gather List 00:24:35.008 SGL Command Set: Supported 00:24:35.008 SGL Keyed: Supported 00:24:35.008 SGL Bit Bucket Descriptor: Not Supported 00:24:35.008 SGL Metadata Pointer: Not Supported 00:24:35.008 Oversized SGL: Not Supported 00:24:35.008 SGL Metadata Address: Not Supported 00:24:35.008 SGL Offset: Supported 00:24:35.008 Transport SGL Data Block: Not Supported 00:24:35.008 Replay Protected Memory Block: Not Supported 00:24:35.008 00:24:35.008 
Firmware Slot Information 00:24:35.008 ========================= 00:24:35.008 Active slot: 0 00:24:35.008 00:24:35.008 00:24:35.008 Error Log 00:24:35.008 ========= 00:24:35.008 00:24:35.008 Active Namespaces 00:24:35.008 ================= 00:24:35.008 Discovery Log Page 00:24:35.008 ================== 00:24:35.008 Generation Counter: 2 00:24:35.008 Number of Records: 2 00:24:35.008 Record Format: 0 00:24:35.008 00:24:35.008 Discovery Log Entry 0 00:24:35.008 ---------------------- 00:24:35.008 Transport Type: 3 (TCP) 00:24:35.008 Address Family: 1 (IPv4) 00:24:35.008 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:35.008 Entry Flags: 00:24:35.008 Duplicate Returned Information: 1 00:24:35.008 Explicit Persistent Connection Support for Discovery: 1 00:24:35.008 Transport Requirements: 00:24:35.008 Secure Channel: Not Required 00:24:35.008 Port ID: 0 (0x0000) 00:24:35.008 Controller ID: 65535 (0xffff) 00:24:35.008 Admin Max SQ Size: 128 00:24:35.008 Transport Service Identifier: 4420 00:24:35.008 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:35.008 Transport Address: 10.0.0.2 00:24:35.008 Discovery Log Entry 1 00:24:35.008 ---------------------- 00:24:35.008 Transport Type: 3 (TCP) 00:24:35.008 Address Family: 1 (IPv4) 00:24:35.008 Subsystem Type: 2 (NVM Subsystem) 00:24:35.008 Entry Flags: 00:24:35.008 Duplicate Returned Information: 0 00:24:35.008 Explicit Persistent Connection Support for Discovery: 0 00:24:35.008 Transport Requirements: 00:24:35.008 Secure Channel: Not Required 00:24:35.008 Port ID: 0 (0x0000) 00:24:35.008 Controller ID: 65535 (0xffff) 00:24:35.008 Admin Max SQ Size: 128 00:24:35.008 Transport Service Identifier: 4420 00:24:35.008 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:35.008 Transport Address: 10.0.0.2 [2024-11-25 13:23:32.536552] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:35.008 [2024-11-25 
13:23:32.536576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d100) on tqpair=0x241b690 00:24:35.008 [2024-11-25 13:23:32.536592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.008 [2024-11-25 13:23:32.536602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d280) on tqpair=0x241b690 00:24:35.008 [2024-11-25 13:23:32.536609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.008 [2024-11-25 13:23:32.536617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d400) on tqpair=0x241b690 00:24:35.008 [2024-11-25 13:23:32.536625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.008 [2024-11-25 13:23:32.536633] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.008 [2024-11-25 13:23:32.536640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.008 [2024-11-25 13:23:32.536659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.536668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.536680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.008 [2024-11-25 13:23:32.536693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.008 [2024-11-25 13:23:32.536734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.008 [2024-11-25 13:23:32.536825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.008 [2024-11-25 
13:23:32.536840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.008 [2024-11-25 13:23:32.536847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.536857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.008 [2024-11-25 13:23:32.536870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.536878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.536885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.008 [2024-11-25 13:23:32.536896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.008 [2024-11-25 13:23:32.536925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.008 [2024-11-25 13:23:32.537022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.008 [2024-11-25 13:23:32.537037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.008 [2024-11-25 13:23:32.537043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.537050] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.008 [2024-11-25 13:23:32.537059] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:35.008 [2024-11-25 13:23:32.537069] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:35.008 [2024-11-25 13:23:32.537087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.537096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.008 
[2024-11-25 13:23:32.537103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.008 [2024-11-25 13:23:32.537115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.008 [2024-11-25 13:23:32.537139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.008 [2024-11-25 13:23:32.537227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.008 [2024-11-25 13:23:32.537242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.008 [2024-11-25 13:23:32.537248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.008 [2024-11-25 13:23:32.537255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.537274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.537314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.537341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.537424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.537439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.537446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on 
tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.537477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.537506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.537531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.537612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.537626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.537633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.537658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.537686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.537710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.537786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.537800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:24:35.009 [2024-11-25 13:23:32.537807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.537833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.537861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.537884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.537968] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.537983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.537990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.537997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.538015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.538043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.538066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.538144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.538159] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.538166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538173] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.538191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538214] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.538224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.538246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.538343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.538359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.538366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.538391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.538419] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.538441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.538529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.538543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.538550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.538575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.538603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.538625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.538707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.538722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.538729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.538754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538765] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.538782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.538804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.538884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.538899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.538905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.538930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.538948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.009 [2024-11-25 13:23:32.538963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.009 [2024-11-25 13:23:32.538986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.009 [2024-11-25 13:23:32.539064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.009 [2024-11-25 13:23:32.539080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.009 [2024-11-25 13:23:32.539087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.009 [2024-11-25 13:23:32.539094] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.009 [2024-11-25 13:23:32.539112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.539123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.539129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.010 [2024-11-25 13:23:32.539140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.010 [2024-11-25 13:23:32.539163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.010 [2024-11-25 13:23:32.539245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.010 [2024-11-25 13:23:32.539259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.010 [2024-11-25 13:23:32.539266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.539273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.010 [2024-11-25 13:23:32.539291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.543310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.543323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x241b690) 00:24:35.010 [2024-11-25 13:23:32.543335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.010 [2024-11-25 13:23:32.543359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x247d580, cid 3, qid 0 00:24:35.010 [2024-11-25 13:23:32.543461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.010 [2024-11-25 
13:23:32.543476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.010 [2024-11-25 13:23:32.543483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.543490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x247d580) on tqpair=0x241b690 00:24:35.010 [2024-11-25 13:23:32.543504] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:35.010 00:24:35.010 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:35.010 [2024-11-25 13:23:32.580135] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:24:35.010 [2024-11-25 13:23:32.580180] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234444 ] 00:24:35.010 [2024-11-25 13:23:32.631030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:35.010 [2024-11-25 13:23:32.631088] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:35.010 [2024-11-25 13:23:32.631102] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:35.010 [2024-11-25 13:23:32.631117] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:35.010 [2024-11-25 13:23:32.631129] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:35.010 [2024-11-25 13:23:32.631590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting 
state to wait for connect adminq (no timeout) 00:24:35.010 [2024-11-25 13:23:32.631632] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2477690 0 00:24:35.010 [2024-11-25 13:23:32.642329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:35.010 [2024-11-25 13:23:32.642350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:35.010 [2024-11-25 13:23:32.642358] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:35.010 [2024-11-25 13:23:32.642364] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:35.010 [2024-11-25 13:23:32.642414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.642427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.642434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.010 [2024-11-25 13:23:32.642448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:35.010 [2024-11-25 13:23:32.642476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.010 [2024-11-25 13:23:32.649315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.010 [2024-11-25 13:23:32.649333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.010 [2024-11-25 13:23:32.649341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.649348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.010 [2024-11-25 13:23:32.649381] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:35.010 [2024-11-25 13:23:32.649394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read 
vs (no timeout) 00:24:35.010 [2024-11-25 13:23:32.649404] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:35.010 [2024-11-25 13:23:32.649423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.649432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.649439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.010 [2024-11-25 13:23:32.649451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.010 [2024-11-25 13:23:32.649476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.010 [2024-11-25 13:23:32.649608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.010 [2024-11-25 13:23:32.649622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.010 [2024-11-25 13:23:32.649629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.649636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.010 [2024-11-25 13:23:32.649645] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:35.010 [2024-11-25 13:23:32.649659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:35.010 [2024-11-25 13:23:32.649671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.649679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.010 [2024-11-25 13:23:32.649686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 
00:24:35.010 [2024-11-25 13:23:32.649697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.010 [2024-11-25 13:23:32.649723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.011 [2024-11-25 13:23:32.649815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.011 [2024-11-25 13:23:32.649829] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.011 [2024-11-25 13:23:32.649836] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.649843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.011 [2024-11-25 13:23:32.649852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:35.011 [2024-11-25 13:23:32.649866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:35.011 [2024-11-25 13:23:32.649879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.649887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.649894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.011 [2024-11-25 13:23:32.649904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.011 [2024-11-25 13:23:32.649926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.011 [2024-11-25 13:23:32.650063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.011 [2024-11-25 13:23:32.650075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.011 
[2024-11-25 13:23:32.650082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.011 [2024-11-25 13:23:32.650098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:35.011 [2024-11-25 13:23:32.650114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.011 [2024-11-25 13:23:32.650141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.011 [2024-11-25 13:23:32.650162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.011 [2024-11-25 13:23:32.650290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.011 [2024-11-25 13:23:32.650310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.011 [2024-11-25 13:23:32.650319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.011 [2024-11-25 13:23:32.650334] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:35.011 [2024-11-25 13:23:32.650343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:35.011 [2024-11-25 13:23:32.650357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:35.011 [2024-11-25 13:23:32.650467] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:35.011 [2024-11-25 13:23:32.650475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:35.011 [2024-11-25 13:23:32.650487] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.011 [2024-11-25 13:23:32.650518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.011 [2024-11-25 13:23:32.650541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.011 [2024-11-25 13:23:32.650667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.011 [2024-11-25 13:23:32.650679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.011 [2024-11-25 13:23:32.650686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.011 [2024-11-25 13:23:32.650701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:35.011 [2024-11-25 13:23:32.650718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.011 [2024-11-25 
13:23:32.650734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.011 [2024-11-25 13:23:32.650745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.011 [2024-11-25 13:23:32.650766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.011 [2024-11-25 13:23:32.650854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.011 [2024-11-25 13:23:32.650868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.011 [2024-11-25 13:23:32.650875] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.011 [2024-11-25 13:23:32.650889] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:35.011 [2024-11-25 13:23:32.650898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:35.011 [2024-11-25 13:23:32.650912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:35.011 [2024-11-25 13:23:32.650926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:35.011 [2024-11-25 13:23:32.650941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.650949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.011 [2024-11-25 13:23:32.650960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 
cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.011 [2024-11-25 13:23:32.650982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.011 [2024-11-25 13:23:32.651118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.011 [2024-11-25 13:23:32.651133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.011 [2024-11-25 13:23:32.651140] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.651147] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=4096, cccid=0 00:24:35.011 [2024-11-25 13:23:32.651154] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9100) on tqpair(0x2477690): expected_datao=0, payload_size=4096 00:24:35.011 [2024-11-25 13:23:32.651162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.651172] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.651180] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.651206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.011 [2024-11-25 13:23:32.651219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.011 [2024-11-25 13:23:32.651226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.651232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.011 [2024-11-25 13:23:32.651243] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:35.011 [2024-11-25 13:23:32.651257] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:35.011 [2024-11-25 13:23:32.651266] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:35.011 [2024-11-25 13:23:32.651273] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:35.011 [2024-11-25 13:23:32.651281] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:35.011 [2024-11-25 13:23:32.651289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:35.011 [2024-11-25 13:23:32.651311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:35.011 [2024-11-25 13:23:32.651325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.011 [2024-11-25 13:23:32.651333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.651351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:35.012 [2024-11-25 13:23:32.651373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.012 [2024-11-25 13:23:32.651499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.012 [2024-11-25 13:23:32.651511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.012 [2024-11-25 13:23:32.651518] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.012 [2024-11-25 13:23:32.651535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:24:35.012 [2024-11-25 13:23:32.651543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.651560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.012 [2024-11-25 13:23:32.651570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.651593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.012 [2024-11-25 13:23:32.651603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.651625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.012 [2024-11-25 13:23:32.651635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.651663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:35.012 [2024-11-25 13:23:32.651672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.651691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.651704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.651712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.651722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.012 [2024-11-25 13:23:32.651745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9100, cid 0, qid 0 00:24:35.012 [2024-11-25 13:23:32.651772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9280, cid 1, qid 0 00:24:35.012 [2024-11-25 13:23:32.651780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9400, cid 2, qid 0 00:24:35.012 [2024-11-25 13:23:32.651787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.012 [2024-11-25 13:23:32.651795] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9700, cid 4, qid 0 00:24:35.012 [2024-11-25 13:23:32.651973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.012 [2024-11-25 13:23:32.651987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.012 [2024-11-25 13:23:32.651994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652001] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9700) on tqpair=0x2477690 00:24:35.012 [2024-11-25 13:23:32.652009] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:35.012 [2024-11-25 13:23:32.652018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.652033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.652044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.652054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.652079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:35.012 [2024-11-25 13:23:32.652101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9700, cid 4, qid 0 00:24:35.012 [2024-11-25 13:23:32.652227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.012 [2024-11-25 13:23:32.652240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.012 [2024-11-25 13:23:32.652247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9700) on tqpair=0x2477690 00:24:35.012 [2024-11-25 13:23:32.652325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 
30000 ms) 00:24:35.012 [2024-11-25 13:23:32.652357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.652377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.652397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.012 [2024-11-25 13:23:32.652419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9700, cid 4, qid 0 00:24:35.012 [2024-11-25 13:23:32.652522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.012 [2024-11-25 13:23:32.652535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.012 [2024-11-25 13:23:32.652542] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652549] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=4096, cccid=4 00:24:35.012 [2024-11-25 13:23:32.652556] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9700) on tqpair(0x2477690): expected_datao=0, payload_size=4096 00:24:35.012 [2024-11-25 13:23:32.652564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652581] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652590] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.012 [2024-11-25 13:23:32.652622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.012 [2024-11-25 
13:23:32.652629] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9700) on tqpair=0x2477690 00:24:35.012 [2024-11-25 13:23:32.652651] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:35.012 [2024-11-25 13:23:32.652674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.652693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:35.012 [2024-11-25 13:23:32.652707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.012 [2024-11-25 13:23:32.652715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477690) 00:24:35.012 [2024-11-25 13:23:32.652726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.012 [2024-11-25 13:23:32.652748] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9700, cid 4, qid 0 00:24:35.012 [2024-11-25 13:23:32.652867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.013 [2024-11-25 13:23:32.652880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.013 [2024-11-25 13:23:32.652888] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.652894] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=4096, cccid=4 00:24:35.013 [2024-11-25 13:23:32.652902] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9700) on tqpair(0x2477690): expected_datao=0, payload_size=4096 00:24:35.013 [2024-11-25 13:23:32.652909] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.652926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.652935] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.652955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.013 [2024-11-25 13:23:32.652966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.013 [2024-11-25 13:23:32.652972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.652979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9700) on tqpair=0x2477690 00:24:35.013 [2024-11-25 13:23:32.653000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:35.013 [2024-11-25 13:23:32.653023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:35.013 [2024-11-25 13:23:32.653038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.653047] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477690) 00:24:35.013 [2024-11-25 13:23:32.653057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.013 [2024-11-25 13:23:32.653079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9700, cid 4, qid 0 00:24:35.013 [2024-11-25 13:23:32.653177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.013 [2024-11-25 13:23:32.653191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.013 [2024-11-25 13:23:32.653197] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.653204] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=4096, cccid=4 00:24:35.013 [2024-11-25 13:23:32.653211] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9700) on tqpair(0x2477690): expected_datao=0, payload_size=4096 00:24:35.013 [2024-11-25 13:23:32.653219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.653236] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.653244] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.653264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.013 [2024-11-25 13:23:32.653275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.013 [2024-11-25 13:23:32.653282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.013 [2024-11-25 13:23:32.653288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9700) on tqpair=0x2477690 00:24:35.274 [2024-11-25 13:23:32.657309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:35.274 [2024-11-25 13:23:32.657348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:35.274 [2024-11-25 13:23:32.657365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:35.274 [2024-11-25 13:23:32.657376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:35.274 [2024-11-25 13:23:32.657399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:35.274 [2024-11-25 13:23:32.657408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:35.274 [2024-11-25 13:23:32.657418] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:35.274 [2024-11-25 13:23:32.657425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:35.274 [2024-11-25 13:23:32.657434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:35.274 [2024-11-25 13:23:32.657453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657462] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.657473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.657485] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.657514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.274 [2024-11-25 13:23:32.657541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9700, cid 4, qid 0 00:24:35.274 [2024-11-25 13:23:32.657554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9880, cid 5, 
qid 0 00:24:35.274 [2024-11-25 13:23:32.657662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.274 [2024-11-25 13:23:32.657676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.274 [2024-11-25 13:23:32.657683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9700) on tqpair=0x2477690 00:24:35.274 [2024-11-25 13:23:32.657700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.274 [2024-11-25 13:23:32.657709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.274 [2024-11-25 13:23:32.657715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9880) on tqpair=0x2477690 00:24:35.274 [2024-11-25 13:23:32.657737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.657758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.657779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9880, cid 5, qid 0 00:24:35.274 [2024-11-25 13:23:32.657906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.274 [2024-11-25 13:23:32.657920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.274 [2024-11-25 13:23:32.657927] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9880) on tqpair=0x2477690 00:24:35.274 [2024-11-25 13:23:32.657949] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.657958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.657969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.657990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9880, cid 5, qid 0 00:24:35.274 [2024-11-25 13:23:32.658120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.274 [2024-11-25 13:23:32.658133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.274 [2024-11-25 13:23:32.658140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9880) on tqpair=0x2477690 00:24:35.274 [2024-11-25 13:23:32.658161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.658181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.658202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9880, cid 5, qid 0 00:24:35.274 [2024-11-25 13:23:32.658330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.274 [2024-11-25 13:23:32.658345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.274 [2024-11-25 13:23:32.658352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24d9880) on tqpair=0x2477690 00:24:35.274 [2024-11-25 13:23:32.658388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.658410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.658423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.658441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.658453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.658470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.658483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2477690) 00:24:35.274 [2024-11-25 13:23:32.658500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.274 [2024-11-25 13:23:32.658523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9880, 
cid 5, qid 0 00:24:35.274 [2024-11-25 13:23:32.658535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9700, cid 4, qid 0 00:24:35.274 [2024-11-25 13:23:32.658542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9a00, cid 6, qid 0 00:24:35.274 [2024-11-25 13:23:32.658550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9b80, cid 7, qid 0 00:24:35.274 [2024-11-25 13:23:32.658719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.274 [2024-11-25 13:23:32.658733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.274 [2024-11-25 13:23:32.658740] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.274 [2024-11-25 13:23:32.658746] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=8192, cccid=5 00:24:35.275 [2024-11-25 13:23:32.658754] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9880) on tqpair(0x2477690): expected_datao=0, payload_size=8192 00:24:35.275 [2024-11-25 13:23:32.658761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658798] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658809] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.275 [2024-11-25 13:23:32.658827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.275 [2024-11-25 13:23:32.658834] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658840] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=512, cccid=4 00:24:35.275 [2024-11-25 13:23:32.658848] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9700) on tqpair(0x2477690): 
expected_datao=0, payload_size=512 00:24:35.275 [2024-11-25 13:23:32.658855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658864] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658871] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.275 [2024-11-25 13:23:32.658895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.275 [2024-11-25 13:23:32.658902] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658909] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=512, cccid=6 00:24:35.275 [2024-11-25 13:23:32.658916] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9a00) on tqpair(0x2477690): expected_datao=0, payload_size=512 00:24:35.275 [2024-11-25 13:23:32.658923] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658932] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658939] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:35.275 [2024-11-25 13:23:32.658957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:35.275 [2024-11-25 13:23:32.658963] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658969] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477690): datao=0, datal=4096, cccid=7 00:24:35.275 [2024-11-25 13:23:32.658977] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24d9b80) on tqpair(0x2477690): expected_datao=0, payload_size=4096 00:24:35.275 [2024-11-25 
13:23:32.658984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.658994] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.659001] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.659013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.275 [2024-11-25 13:23:32.659022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.275 [2024-11-25 13:23:32.659029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.659035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9880) on tqpair=0x2477690 00:24:35.275 [2024-11-25 13:23:32.659054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.275 [2024-11-25 13:23:32.659066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.275 [2024-11-25 13:23:32.659072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.659079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9700) on tqpair=0x2477690 00:24:35.275 [2024-11-25 13:23:32.659095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.275 [2024-11-25 13:23:32.659106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.275 [2024-11-25 13:23:32.659128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.275 [2024-11-25 13:23:32.659134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9a00) on tqpair=0x2477690 00:24:35.275 [2024-11-25 13:23:32.659145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.275 [2024-11-25 13:23:32.659155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.275 [2024-11-25 13:23:32.659161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:35.275 [2024-11-25 13:23:32.659167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9b80) on tqpair=0x2477690 00:24:35.275 ===================================================== 00:24:35.275 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.275 ===================================================== 00:24:35.275 Controller Capabilities/Features 00:24:35.275 ================================ 00:24:35.275 Vendor ID: 8086 00:24:35.275 Subsystem Vendor ID: 8086 00:24:35.275 Serial Number: SPDK00000000000001 00:24:35.275 Model Number: SPDK bdev Controller 00:24:35.275 Firmware Version: 25.01 00:24:35.275 Recommended Arb Burst: 6 00:24:35.275 IEEE OUI Identifier: e4 d2 5c 00:24:35.275 Multi-path I/O 00:24:35.275 May have multiple subsystem ports: Yes 00:24:35.275 May have multiple controllers: Yes 00:24:35.275 Associated with SR-IOV VF: No 00:24:35.275 Max Data Transfer Size: 131072 00:24:35.275 Max Number of Namespaces: 32 00:24:35.275 Max Number of I/O Queues: 127 00:24:35.275 NVMe Specification Version (VS): 1.3 00:24:35.275 NVMe Specification Version (Identify): 1.3 00:24:35.275 Maximum Queue Entries: 128 00:24:35.275 Contiguous Queues Required: Yes 00:24:35.275 Arbitration Mechanisms Supported 00:24:35.275 Weighted Round Robin: Not Supported 00:24:35.275 Vendor Specific: Not Supported 00:24:35.275 Reset Timeout: 15000 ms 00:24:35.275 Doorbell Stride: 4 bytes 00:24:35.275 NVM Subsystem Reset: Not Supported 00:24:35.275 Command Sets Supported 00:24:35.275 NVM Command Set: Supported 00:24:35.275 Boot Partition: Not Supported 00:24:35.275 Memory Page Size Minimum: 4096 bytes 00:24:35.275 Memory Page Size Maximum: 4096 bytes 00:24:35.275 Persistent Memory Region: Not Supported 00:24:35.275 Optional Asynchronous Events Supported 00:24:35.275 Namespace Attribute Notices: Supported 00:24:35.275 Firmware Activation Notices: Not Supported 00:24:35.275 ANA Change Notices: Not Supported 00:24:35.275 PLE Aggregate Log 
Change Notices: Not Supported 00:24:35.275 LBA Status Info Alert Notices: Not Supported 00:24:35.275 EGE Aggregate Log Change Notices: Not Supported 00:24:35.275 Normal NVM Subsystem Shutdown event: Not Supported 00:24:35.275 Zone Descriptor Change Notices: Not Supported 00:24:35.275 Discovery Log Change Notices: Not Supported 00:24:35.275 Controller Attributes 00:24:35.275 128-bit Host Identifier: Supported 00:24:35.275 Non-Operational Permissive Mode: Not Supported 00:24:35.275 NVM Sets: Not Supported 00:24:35.275 Read Recovery Levels: Not Supported 00:24:35.275 Endurance Groups: Not Supported 00:24:35.275 Predictable Latency Mode: Not Supported 00:24:35.275 Traffic Based Keep ALive: Not Supported 00:24:35.275 Namespace Granularity: Not Supported 00:24:35.275 SQ Associations: Not Supported 00:24:35.275 UUID List: Not Supported 00:24:35.275 Multi-Domain Subsystem: Not Supported 00:24:35.275 Fixed Capacity Management: Not Supported 00:24:35.275 Variable Capacity Management: Not Supported 00:24:35.275 Delete Endurance Group: Not Supported 00:24:35.275 Delete NVM Set: Not Supported 00:24:35.275 Extended LBA Formats Supported: Not Supported 00:24:35.275 Flexible Data Placement Supported: Not Supported 00:24:35.275 00:24:35.275 Controller Memory Buffer Support 00:24:35.275 ================================ 00:24:35.275 Supported: No 00:24:35.275 00:24:35.275 Persistent Memory Region Support 00:24:35.275 ================================ 00:24:35.275 Supported: No 00:24:35.275 00:24:35.275 Admin Command Set Attributes 00:24:35.275 ============================ 00:24:35.275 Security Send/Receive: Not Supported 00:24:35.275 Format NVM: Not Supported 00:24:35.275 Firmware Activate/Download: Not Supported 00:24:35.275 Namespace Management: Not Supported 00:24:35.275 Device Self-Test: Not Supported 00:24:35.275 Directives: Not Supported 00:24:35.275 NVMe-MI: Not Supported 00:24:35.275 Virtualization Management: Not Supported 00:24:35.275 Doorbell Buffer Config: Not Supported 
00:24:35.275 Get LBA Status Capability: Not Supported 00:24:35.275 Command & Feature Lockdown Capability: Not Supported 00:24:35.275 Abort Command Limit: 4 00:24:35.275 Async Event Request Limit: 4 00:24:35.275 Number of Firmware Slots: N/A 00:24:35.275 Firmware Slot 1 Read-Only: N/A 00:24:35.275 Firmware Activation Without Reset: N/A 00:24:35.275 Multiple Update Detection Support: N/A 00:24:35.275 Firmware Update Granularity: No Information Provided 00:24:35.275 Per-Namespace SMART Log: No 00:24:35.275 Asymmetric Namespace Access Log Page: Not Supported 00:24:35.275 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:35.275 Command Effects Log Page: Supported 00:24:35.275 Get Log Page Extended Data: Supported 00:24:35.275 Telemetry Log Pages: Not Supported 00:24:35.275 Persistent Event Log Pages: Not Supported 00:24:35.275 Supported Log Pages Log Page: May Support 00:24:35.275 Commands Supported & Effects Log Page: Not Supported 00:24:35.275 Feature Identifiers & Effects Log Page:May Support 00:24:35.275 NVMe-MI Commands & Effects Log Page: May Support 00:24:35.275 Data Area 4 for Telemetry Log: Not Supported 00:24:35.275 Error Log Page Entries Supported: 128 00:24:35.275 Keep Alive: Supported 00:24:35.275 Keep Alive Granularity: 10000 ms 00:24:35.275 00:24:35.275 NVM Command Set Attributes 00:24:35.275 ========================== 00:24:35.275 Submission Queue Entry Size 00:24:35.275 Max: 64 00:24:35.275 Min: 64 00:24:35.275 Completion Queue Entry Size 00:24:35.275 Max: 16 00:24:35.275 Min: 16 00:24:35.275 Number of Namespaces: 32 00:24:35.276 Compare Command: Supported 00:24:35.276 Write Uncorrectable Command: Not Supported 00:24:35.276 Dataset Management Command: Supported 00:24:35.276 Write Zeroes Command: Supported 00:24:35.276 Set Features Save Field: Not Supported 00:24:35.276 Reservations: Supported 00:24:35.276 Timestamp: Not Supported 00:24:35.276 Copy: Supported 00:24:35.276 Volatile Write Cache: Present 00:24:35.276 Atomic Write Unit (Normal): 1 00:24:35.276 
Atomic Write Unit (PFail): 1 00:24:35.276 Atomic Compare & Write Unit: 1 00:24:35.276 Fused Compare & Write: Supported 00:24:35.276 Scatter-Gather List 00:24:35.276 SGL Command Set: Supported 00:24:35.276 SGL Keyed: Supported 00:24:35.276 SGL Bit Bucket Descriptor: Not Supported 00:24:35.276 SGL Metadata Pointer: Not Supported 00:24:35.276 Oversized SGL: Not Supported 00:24:35.276 SGL Metadata Address: Not Supported 00:24:35.276 SGL Offset: Supported 00:24:35.276 Transport SGL Data Block: Not Supported 00:24:35.276 Replay Protected Memory Block: Not Supported 00:24:35.276 00:24:35.276 Firmware Slot Information 00:24:35.276 ========================= 00:24:35.276 Active slot: 1 00:24:35.276 Slot 1 Firmware Revision: 25.01 00:24:35.276 00:24:35.276 00:24:35.276 Commands Supported and Effects 00:24:35.276 ============================== 00:24:35.276 Admin Commands 00:24:35.276 -------------- 00:24:35.276 Get Log Page (02h): Supported 00:24:35.276 Identify (06h): Supported 00:24:35.276 Abort (08h): Supported 00:24:35.276 Set Features (09h): Supported 00:24:35.276 Get Features (0Ah): Supported 00:24:35.276 Asynchronous Event Request (0Ch): Supported 00:24:35.276 Keep Alive (18h): Supported 00:24:35.276 I/O Commands 00:24:35.276 ------------ 00:24:35.276 Flush (00h): Supported LBA-Change 00:24:35.276 Write (01h): Supported LBA-Change 00:24:35.276 Read (02h): Supported 00:24:35.276 Compare (05h): Supported 00:24:35.276 Write Zeroes (08h): Supported LBA-Change 00:24:35.276 Dataset Management (09h): Supported LBA-Change 00:24:35.276 Copy (19h): Supported LBA-Change 00:24:35.276 00:24:35.276 Error Log 00:24:35.276 ========= 00:24:35.276 00:24:35.276 Arbitration 00:24:35.276 =========== 00:24:35.276 Arbitration Burst: 1 00:24:35.276 00:24:35.276 Power Management 00:24:35.276 ================ 00:24:35.276 Number of Power States: 1 00:24:35.276 Current Power State: Power State #0 00:24:35.276 Power State #0: 00:24:35.276 Max Power: 0.00 W 00:24:35.276 Non-Operational State: 
Operational 00:24:35.276 Entry Latency: Not Reported 00:24:35.276 Exit Latency: Not Reported 00:24:35.276 Relative Read Throughput: 0 00:24:35.276 Relative Read Latency: 0 00:24:35.276 Relative Write Throughput: 0 00:24:35.276 Relative Write Latency: 0 00:24:35.276 Idle Power: Not Reported 00:24:35.276 Active Power: Not Reported 00:24:35.276 Non-Operational Permissive Mode: Not Supported 00:24:35.276 00:24:35.276 Health Information 00:24:35.276 ================== 00:24:35.276 Critical Warnings: 00:24:35.276 Available Spare Space: OK 00:24:35.276 Temperature: OK 00:24:35.276 Device Reliability: OK 00:24:35.276 Read Only: No 00:24:35.276 Volatile Memory Backup: OK 00:24:35.276 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:35.276 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:35.276 Available Spare: 0% 00:24:35.276 Available Spare Threshold: 0% 00:24:35.276 Life Percentage Used:[2024-11-25 13:23:32.659309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.276 [2024-11-25 13:23:32.659338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2477690) 00:24:35.276 [2024-11-25 13:23:32.659349] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.276 [2024-11-25 13:23:32.659373] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9b80, cid 7, qid 0 00:24:35.276 [2024-11-25 13:23:32.659507] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.276 [2024-11-25 13:23:32.659520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.276 [2024-11-25 13:23:32.659527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.276 [2024-11-25 13:23:32.659537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9b80) on tqpair=0x2477690 00:24:35.276 [2024-11-25 13:23:32.659579] 
nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:35.276 [2024-11-25 13:23:32.659600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9100) on tqpair=0x2477690 00:24:35.276 [2024-11-25 13:23:32.659610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.276 [2024-11-25 13:23:32.659620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9280) on tqpair=0x2477690 00:24:35.276 [2024-11-25 13:23:32.659628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.276 [2024-11-25 13:23:32.659636] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9400) on tqpair=0x2477690 00:24:35.276 [2024-11-25 13:23:32.659644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.276 [2024-11-25 13:23:32.659652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.276 [2024-11-25 13:23:32.659660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.276 [2024-11-25 13:23:32.659673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.276 [2024-11-25 13:23:32.659681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.276 [2024-11-25 13:23:32.659688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.276 [2024-11-25 13:23:32.659699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.276 [2024-11-25 13:23:32.659722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 
0 00:24:35.276 [2024-11-25 13:23:32.659807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.276 [2024-11-25 13:23:32.659821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.276 [2024-11-25 13:23:32.659828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.276 [2024-11-25 13:23:32.659835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.276 [2024-11-25 13:23:32.659846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.276 [2024-11-25 13:23:32.659855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.276 [2024-11-25 13:23:32.659862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.276 [2024-11-25 13:23:32.659872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.276 [2024-11-25 13:23:32.659899] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.276 [2024-11-25 13:23:32.659998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.276 [2024-11-25 13:23:32.660011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.276 [2024-11-25 13:23:32.660018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.660032] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:35.277 [2024-11-25 13:23:32.660040] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:35.277 [2024-11-25 13:23:32.660056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:35.277 [2024-11-25 13:23:32.660065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.277 [2024-11-25 13:23:32.660082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.277 [2024-11-25 13:23:32.660108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.277 [2024-11-25 13:23:32.660234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.277 [2024-11-25 13:23:32.660248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.277 [2024-11-25 13:23:32.660255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.660278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660288] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.277 [2024-11-25 13:23:32.660313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.277 [2024-11-25 13:23:32.660337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.277 [2024-11-25 13:23:32.660461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.277 [2024-11-25 13:23:32.660474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.277 [2024-11-25 13:23:32.660480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:24:35.277 [2024-11-25 13:23:32.660487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.660503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.277 [2024-11-25 13:23:32.660530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.277 [2024-11-25 13:23:32.660551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.277 [2024-11-25 13:23:32.660642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.277 [2024-11-25 13:23:32.660656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.277 [2024-11-25 13:23:32.660663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.660686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.277 [2024-11-25 13:23:32.660713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.277 [2024-11-25 13:23:32.660733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.277 [2024-11-25 13:23:32.660829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:24:35.277 [2024-11-25 13:23:32.660843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.277 [2024-11-25 13:23:32.660850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.660873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.660889] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.277 [2024-11-25 13:23:32.660899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.277 [2024-11-25 13:23:32.660920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.277 [2024-11-25 13:23:32.661053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.277 [2024-11-25 13:23:32.661067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.277 [2024-11-25 13:23:32.661074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.661081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.661097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.661107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.661114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.277 [2024-11-25 13:23:32.661125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:35.277 [2024-11-25 13:23:32.661145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.277 [2024-11-25 13:23:32.661270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.277 [2024-11-25 13:23:32.661282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.277 [2024-11-25 13:23:32.661289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.661296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.665340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.665353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.665360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477690) 00:24:35.277 [2024-11-25 13:23:32.665371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.277 [2024-11-25 13:23:32.665394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24d9580, cid 3, qid 0 00:24:35.277 [2024-11-25 13:23:32.665486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:35.277 [2024-11-25 13:23:32.665501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:35.277 [2024-11-25 13:23:32.665508] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:35.277 [2024-11-25 13:23:32.665515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24d9580) on tqpair=0x2477690 00:24:35.277 [2024-11-25 13:23:32.665528] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:24:35.277 0% 00:24:35.277 Data Units Read: 0 00:24:35.277 Data Units Written: 0 00:24:35.277 Host Read 
Commands: 0 00:24:35.277 Host Write Commands: 0 00:24:35.277 Controller Busy Time: 0 minutes 00:24:35.277 Power Cycles: 0 00:24:35.277 Power On Hours: 0 hours 00:24:35.277 Unsafe Shutdowns: 0 00:24:35.277 Unrecoverable Media Errors: 0 00:24:35.277 Lifetime Error Log Entries: 0 00:24:35.277 Warning Temperature Time: 0 minutes 00:24:35.277 Critical Temperature Time: 0 minutes 00:24:35.277 00:24:35.277 Number of Queues 00:24:35.277 ================ 00:24:35.277 Number of I/O Submission Queues: 127 00:24:35.277 Number of I/O Completion Queues: 127 00:24:35.277 00:24:35.277 Active Namespaces 00:24:35.277 ================= 00:24:35.277 Namespace ID:1 00:24:35.277 Error Recovery Timeout: Unlimited 00:24:35.277 Command Set Identifier: NVM (00h) 00:24:35.277 Deallocate: Supported 00:24:35.277 Deallocated/Unwritten Error: Not Supported 00:24:35.277 Deallocated Read Value: Unknown 00:24:35.277 Deallocate in Write Zeroes: Not Supported 00:24:35.277 Deallocated Guard Field: 0xFFFF 00:24:35.277 Flush: Supported 00:24:35.277 Reservation: Supported 00:24:35.277 Namespace Sharing Capabilities: Multiple Controllers 00:24:35.277 Size (in LBAs): 131072 (0GiB) 00:24:35.277 Capacity (in LBAs): 131072 (0GiB) 00:24:35.277 Utilization (in LBAs): 131072 (0GiB) 00:24:35.277 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:35.277 EUI64: ABCDEF0123456789 00:24:35.277 UUID: 352e0278-6941-465d-81cd-d86c23f9ab2c 00:24:35.277 Thin Provisioning: Not Supported 00:24:35.277 Per-NS Atomic Units: Yes 00:24:35.277 Atomic Boundary Size (Normal): 0 00:24:35.277 Atomic Boundary Size (PFail): 0 00:24:35.277 Atomic Boundary Offset: 0 00:24:35.277 Maximum Single Source Range Length: 65535 00:24:35.277 Maximum Copy Length: 65535 00:24:35.277 Maximum Source Range Count: 1 00:24:35.277 NGUID/EUI64 Never Reused: No 00:24:35.277 Namespace Write Protected: No 00:24:35.277 Number of LBA Formats: 1 00:24:35.277 Current LBA Format: LBA Format #00 00:24:35.277 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:35.277 
00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:35.277 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.278 rmmod nvme_tcp 00:24:35.278 rmmod nvme_fabrics 00:24:35.278 rmmod nvme_keyring 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3234303 ']' 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3234303 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- 
# '[' -z 3234303 ']' 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3234303 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3234303 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3234303' 00:24:35.278 killing process with pid 3234303 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3234303 00:24:35.278 13:23:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3234303 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.536 13:23:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.071 00:24:38.071 real 0m5.587s 00:24:38.071 user 0m4.457s 00:24:38.071 sys 0m2.007s 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:38.071 ************************************ 00:24:38.071 END TEST nvmf_identify 00:24:38.071 ************************************ 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.071 ************************************ 00:24:38.071 START TEST nvmf_perf 00:24:38.071 ************************************ 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:38.071 * Looking for test storage... 
00:24:38.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.071 --rc genhtml_branch_coverage=1 00:24:38.071 --rc genhtml_function_coverage=1 00:24:38.071 --rc genhtml_legend=1 00:24:38.071 --rc geninfo_all_blocks=1 00:24:38.071 --rc geninfo_unexecuted_blocks=1 00:24:38.071 00:24:38.071 ' 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:38.071 --rc genhtml_branch_coverage=1 00:24:38.071 --rc genhtml_function_coverage=1 00:24:38.071 --rc genhtml_legend=1 00:24:38.071 --rc geninfo_all_blocks=1 00:24:38.071 --rc geninfo_unexecuted_blocks=1 00:24:38.071 00:24:38.071 ' 00:24:38.071 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.071 --rc genhtml_branch_coverage=1 00:24:38.071 --rc genhtml_function_coverage=1 00:24:38.071 --rc genhtml_legend=1 00:24:38.071 --rc geninfo_all_blocks=1 00:24:38.071 --rc geninfo_unexecuted_blocks=1 00:24:38.071 00:24:38.072 ' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.072 --rc genhtml_branch_coverage=1 00:24:38.072 --rc genhtml_function_coverage=1 00:24:38.072 --rc genhtml_legend=1 00:24:38.072 --rc geninfo_all_blocks=1 00:24:38.072 --rc geninfo_unexecuted_blocks=1 00:24:38.072 00:24:38.072 ' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:38.072 13:23:35 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:38.072 13:23:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.978 13:23:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.978 
13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:39.978 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:39.978 Found 0000:09:00.1 (0x8086 - 
0x159b) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:39.978 Found net devices under 0000:09:00.0: cvl_0_0 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.978 13:23:37 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:39.978 Found net devices under 0000:09:00.1: cvl_0_1 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.978 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:24:39.979 00:24:39.979 --- 10.0.0.2 ping statistics --- 00:24:39.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.979 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:24:39.979 00:24:39.979 --- 10.0.0.1 ping statistics --- 00:24:39.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.979 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.979 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3236387 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3236387 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3236387 ']' 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.237 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:40.237 [2024-11-25 13:23:37.699726] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:24:40.237 [2024-11-25 13:23:37.699802] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.237 [2024-11-25 13:23:37.776633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.237 [2024-11-25 13:23:37.836862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.237 [2024-11-25 13:23:37.836917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.237 [2024-11-25 13:23:37.836945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.237 [2024-11-25 13:23:37.836956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.237 [2024-11-25 13:23:37.836966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:40.237 [2024-11-25 13:23:37.838671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.237 [2024-11-25 13:23:37.838731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.237 [2024-11-25 13:23:37.838989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.237 [2024-11-25 13:23:37.838993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:40.495 13:23:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:43.776 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:43.776 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:43.776 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:24:43.776 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:44.034 13:23:41 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:44.034 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:24:44.034 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:44.034 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:44.034 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:44.292 [2024-11-25 13:23:41.920364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.292 13:23:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.857 13:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:44.857 13:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.857 13:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:44.857 13:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:45.114 13:23:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.372 [2024-11-25 13:23:43.008309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.372 13:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:45.935 13:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:24:45.935 13:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:24:45.935 13:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:45.935 13:23:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:24:46.864 Initializing NVMe Controllers 00:24:46.864 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:24:46.864 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:24:46.864 Initialization complete. Launching workers. 00:24:46.864 ======================================================== 00:24:46.864 Latency(us) 00:24:46.864 Device Information : IOPS MiB/s Average min max 00:24:46.864 PCIE (0000:0b:00.0) NSID 1 from core 0: 84231.56 329.03 379.27 28.06 5345.70 00:24:46.864 ======================================================== 00:24:46.864 Total : 84231.56 329.03 379.27 28.06 5345.70 00:24:46.864 00:24:47.162 13:23:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.532 Initializing NVMe Controllers 00:24:48.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:48.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:48.532 Initialization complete. Launching workers. 
00:24:48.532 ======================================================== 00:24:48.532 Latency(us) 00:24:48.532 Device Information : IOPS MiB/s Average min max 00:24:48.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 66.00 0.26 15523.36 141.10 45759.07 00:24:48.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.00 0.28 14189.87 7938.07 47898.92 00:24:48.532 ======================================================== 00:24:48.532 Total : 137.00 0.54 14832.28 141.10 47898.92 00:24:48.532 00:24:48.532 13:23:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:49.904 Initializing NVMe Controllers 00:24:49.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:49.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:49.904 Initialization complete. Launching workers. 
00:24:49.904 ======================================================== 00:24:49.904 Latency(us) 00:24:49.904 Device Information : IOPS MiB/s Average min max 00:24:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8094.62 31.62 3954.28 774.48 10423.61 00:24:49.904 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3980.57 15.55 8068.99 5210.20 15540.28 00:24:49.904 ======================================================== 00:24:49.904 Total : 12075.18 47.17 5310.69 774.48 15540.28 00:24:49.904 00:24:49.904 13:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:49.904 13:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:49.904 13:23:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:52.431 Initializing NVMe Controllers 00:24:52.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:52.432 Controller IO queue size 128, less than required. 00:24:52.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.432 Controller IO queue size 128, less than required. 00:24:52.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:52.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:52.432 Initialization complete. Launching workers. 
00:24:52.432 ======================================================== 00:24:52.432 Latency(us) 00:24:52.432 Device Information : IOPS MiB/s Average min max 00:24:52.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1665.43 416.36 77935.87 56258.40 119342.08 00:24:52.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.13 144.53 235004.38 69288.18 365823.80 00:24:52.432 ======================================================== 00:24:52.432 Total : 2243.56 560.89 118409.87 56258.40 365823.80 00:24:52.432 00:24:52.432 13:23:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:52.689 No valid NVMe controllers or AIO or URING devices found 00:24:52.689 Initializing NVMe Controllers 00:24:52.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:52.689 Controller IO queue size 128, less than required. 00:24:52.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.689 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:52.689 Controller IO queue size 128, less than required. 00:24:52.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:52.689 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:52.689 WARNING: Some requested NVMe devices were skipped 00:24:52.689 13:23:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:55.220 Initializing NVMe Controllers 00:24:55.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.220 Controller IO queue size 128, less than required. 00:24:55.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:55.220 Controller IO queue size 128, less than required. 00:24:55.220 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:55.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:55.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:55.220 Initialization complete. Launching workers. 
00:24:55.220 00:24:55.220 ==================== 00:24:55.220 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:55.220 TCP transport: 00:24:55.220 polls: 9725 00:24:55.220 idle_polls: 6400 00:24:55.220 sock_completions: 3325 00:24:55.220 nvme_completions: 6163 00:24:55.220 submitted_requests: 9324 00:24:55.220 queued_requests: 1 00:24:55.220 00:24:55.220 ==================== 00:24:55.220 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:55.220 TCP transport: 00:24:55.220 polls: 13231 00:24:55.220 idle_polls: 9903 00:24:55.220 sock_completions: 3328 00:24:55.220 nvme_completions: 5945 00:24:55.220 submitted_requests: 8964 00:24:55.220 queued_requests: 1 00:24:55.220 ======================================================== 00:24:55.220 Latency(us) 00:24:55.221 Device Information : IOPS MiB/s Average min max 00:24:55.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1538.85 384.71 84222.41 46104.51 131953.70 00:24:55.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1484.41 371.10 87075.51 48944.01 142653.21 00:24:55.221 ======================================================== 00:24:55.221 Total : 3023.27 755.82 85623.27 46104.51 142653.21 00:24:55.221 00:24:55.221 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:55.221 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@121 -- # sync 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:55.480 13:23:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:55.480 rmmod nvme_tcp 00:24:55.480 rmmod nvme_fabrics 00:24:55.480 rmmod nvme_keyring 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3236387 ']' 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3236387 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3236387 ']' 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3236387 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3236387 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3236387' 00:24:55.480 killing process with pid 3236387 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 
-- # kill 3236387 00:24:55.480 13:23:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3236387 00:24:57.379 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.379 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.379 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.380 13:23:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.286 00:24:59.286 real 0m21.499s 00:24:59.286 user 1m5.484s 00:24:59.286 sys 0m5.755s 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:59.286 ************************************ 00:24:59.286 END TEST nvmf_perf 00:24:59.286 ************************************ 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.286 ************************************ 00:24:59.286 START TEST nvmf_fio_host 00:24:59.286 ************************************ 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:59.286 * Looking for test storage... 00:24:59.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.286 13:23:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.286 13:23:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:59.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.286 --rc genhtml_branch_coverage=1 00:24:59.286 --rc genhtml_function_coverage=1 00:24:59.286 --rc genhtml_legend=1 00:24:59.286 --rc geninfo_all_blocks=1 00:24:59.286 --rc geninfo_unexecuted_blocks=1 00:24:59.286 00:24:59.286 ' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:59.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.286 --rc genhtml_branch_coverage=1 00:24:59.286 --rc genhtml_function_coverage=1 00:24:59.286 --rc genhtml_legend=1 00:24:59.286 --rc geninfo_all_blocks=1 00:24:59.286 --rc geninfo_unexecuted_blocks=1 00:24:59.286 00:24:59.286 ' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:59.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.286 --rc genhtml_branch_coverage=1 00:24:59.286 --rc genhtml_function_coverage=1 00:24:59.286 --rc genhtml_legend=1 00:24:59.286 --rc geninfo_all_blocks=1 00:24:59.286 --rc geninfo_unexecuted_blocks=1 00:24:59.286 00:24:59.286 ' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:59.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.286 --rc genhtml_branch_coverage=1 00:24:59.286 --rc genhtml_function_coverage=1 00:24:59.286 --rc genhtml_legend=1 00:24:59.286 --rc geninfo_all_blocks=1 00:24:59.286 --rc geninfo_unexecuted_blocks=1 00:24:59.286 00:24:59.286 ' 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.286 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:59.287 13:23:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.287 13:23:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.817 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.817 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.817 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.817 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.817 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x159b)' 00:25:01.818 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:01.818 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.818 13:23:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:01.818 Found net devices under 0000:09:00.0: cvl_0_0 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:01.818 Found net devices under 0000:09:00.1: cvl_0_1 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.818 13:23:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:25:01.818 00:25:01.818 --- 10.0.0.2 ping statistics --- 00:25:01.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.818 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:01.818 00:25:01.818 --- 10.0.0.1 ping statistics --- 00:25:01.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.818 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:01.818 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3240361 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3240361 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3240361 ']' 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.819 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.819 [2024-11-25 13:23:59.261001] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:25:01.819 [2024-11-25 13:23:59.261083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.819 [2024-11-25 13:23:59.333902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.819 [2024-11-25 13:23:59.393153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.819 [2024-11-25 13:23:59.393203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:01.819 [2024-11-25 13:23:59.393231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.819 [2024-11-25 13:23:59.393242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.819 [2024-11-25 13:23:59.393251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.819 [2024-11-25 13:23:59.394941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.819 [2024-11-25 13:23:59.395039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.819 [2024-11-25 13:23:59.395134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.819 [2024-11-25 13:23:59.395142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.076 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.076 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:02.076 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:02.333 [2024-11-25 13:23:59.768981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.334 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:02.334 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:02.334 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.334 13:23:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:02.591 Malloc1 00:25:02.591 13:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.848 13:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:03.105 13:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.363 [2024-11-25 13:24:00.948452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.364 13:24:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:03.622 13:24:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:03.622 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:03.880 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:03.880 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:03.880 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:03.880 13:24:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.880 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:03.880 fio-3.35 00:25:03.880 Starting 1 thread 00:25:06.486 00:25:06.486 test: (groupid=0, jobs=1): err= 0: pid=3240945: Mon Nov 25 13:24:03 2024 00:25:06.486 read: IOPS=8814, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2007msec) 00:25:06.486 slat (usec): min=2, max=136, avg= 2.63, stdev= 1.65 00:25:06.486 clat (usec): min=2383, max=15034, avg=7894.58, stdev=680.42 00:25:06.486 lat (usec): min=2405, max=15036, avg=7897.22, stdev=680.32 00:25:06.486 clat percentiles (usec): 00:25:06.486 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:25:06.486 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8029], 00:25:06.486 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:25:06.486 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[12780], 99.95th=[14484], 00:25:06.486 | 99.99th=[14615] 00:25:06.486 bw ( KiB/s): min=34272, max=36048, per=99.97%, avg=35248.00, stdev=744.87, samples=4 00:25:06.486 iops : min= 8568, max= 9012, avg=8812.00, stdev=186.22, samples=4 00:25:06.486 write: IOPS=8827, BW=34.5MiB/s (36.2MB/s)(69.2MiB/2007msec); 0 zone resets 00:25:06.486 slat (nsec): min=2224, max=91244, avg=2758.82, stdev=1265.86 00:25:06.486 clat (usec): min=1004, max=12726, avg=6557.55, stdev=557.35 00:25:06.486 lat (usec): min=1010, max=12728, avg=6560.31, stdev=557.31 00:25:06.486 clat percentiles (usec): 00:25:06.486 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:25:06.486 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:25:06.486 | 
70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:25:06.486 | 99.00th=[ 7767], 99.50th=[ 7898], 99.90th=[10945], 99.95th=[12125], 00:25:06.486 | 99.99th=[12649] 00:25:06.486 bw ( KiB/s): min=35144, max=35608, per=100.00%, avg=35314.00, stdev=202.74, samples=4 00:25:06.486 iops : min= 8786, max= 8902, avg=8828.50, stdev=50.69, samples=4 00:25:06.486 lat (msec) : 2=0.03%, 4=0.11%, 10=99.69%, 20=0.18% 00:25:06.486 cpu : usr=66.15%, sys=32.10%, ctx=80, majf=0, minf=31 00:25:06.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:06.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:06.486 issued rwts: total=17691,17717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.486 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:06.486 00:25:06.486 Run status group 0 (all jobs): 00:25:06.486 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.5MB), run=2007-2007msec 00:25:06.486 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=69.2MiB (72.6MB), run=2007-2007msec 00:25:06.486 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:06.486 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:06.486 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:06.486 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:06.487 13:24:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:06.487 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:06.487 fio-3.35 00:25:06.487 Starting 1 thread 00:25:09.014 00:25:09.014 test: (groupid=0, jobs=1): err= 0: pid=3241294: Mon Nov 25 13:24:06 2024 00:25:09.014 read: IOPS=8291, BW=130MiB/s (136MB/s)(260MiB/2008msec) 00:25:09.014 slat (nsec): min=2966, max=96682, avg=3743.59, stdev=1634.21 00:25:09.014 clat (usec): min=2056, max=17604, avg=8769.43, stdev=2137.82 00:25:09.014 lat (usec): min=2060, max=17608, avg=8773.17, stdev=2137.87 00:25:09.014 clat percentiles (usec): 00:25:09.014 | 1.00th=[ 4490], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 6980], 00:25:09.014 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:25:09.014 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11469], 95.00th=[12649], 00:25:09.014 | 99.00th=[14353], 99.50th=[15008], 99.90th=[16909], 99.95th=[17433], 00:25:09.014 | 99.99th=[17433] 00:25:09.014 bw ( KiB/s): min=61568, max=81728, per=52.83%, avg=70088.00, stdev=9893.48, samples=4 00:25:09.014 iops : min= 3848, max= 5108, avg=4380.50, stdev=618.34, samples=4 00:25:09.014 write: IOPS=4997, BW=78.1MiB/s (81.9MB/s)(143MiB/1834msec); 0 zone resets 00:25:09.014 slat (usec): min=31, max=185, avg=34.27, stdev= 5.19 00:25:09.014 clat (usec): min=5582, max=20163, avg=11455.70, stdev=1891.90 00:25:09.014 lat (usec): min=5614, max=20196, avg=11489.97, stdev=1891.96 00:25:09.014 clat percentiles (usec): 00:25:09.014 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9241], 
20.00th=[ 9896], 00:25:09.014 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:25:09.014 | 70.00th=[12387], 80.00th=[13042], 90.00th=[13960], 95.00th=[14746], 00:25:09.014 | 99.00th=[16319], 99.50th=[17171], 99.90th=[17695], 99.95th=[18482], 00:25:09.014 | 99.99th=[20055] 00:25:09.014 bw ( KiB/s): min=63296, max=84416, per=90.99%, avg=72760.00, stdev=10504.79, samples=4 00:25:09.014 iops : min= 3956, max= 5276, avg=4547.50, stdev=656.55, samples=4 00:25:09.014 lat (msec) : 4=0.21%, 10=54.54%, 20=45.24%, 50=0.01% 00:25:09.014 cpu : usr=77.53%, sys=21.23%, ctx=46, majf=0, minf=57 00:25:09.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:09.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:09.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:09.015 issued rwts: total=16650,9166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:09.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:09.015 00:25:09.015 Run status group 0 (all jobs): 00:25:09.015 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=260MiB (273MB), run=2008-2008msec 00:25:09.015 WRITE: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=143MiB (150MB), run=1834-1834msec 00:25:09.015 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.272 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.272 rmmod nvme_tcp 00:25:09.272 rmmod nvme_fabrics 00:25:09.272 rmmod nvme_keyring 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3240361 ']' 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3240361 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3240361 ']' 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3240361 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3240361 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3240361' 00:25:09.530 killing process with pid 3240361 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3240361 00:25:09.530 13:24:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3240361 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.790 13:24:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.693 00:25:11.693 real 0m12.557s 00:25:11.693 user 0m37.222s 00:25:11.693 sys 0m4.086s 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.693 
************************************ 00:25:11.693 END TEST nvmf_fio_host 00:25:11.693 ************************************ 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.693 ************************************ 00:25:11.693 START TEST nvmf_failover 00:25:11.693 ************************************ 00:25:11.693 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.951 * Looking for test storage... 00:25:11.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.952 13:24:09 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.952 --rc genhtml_branch_coverage=1 00:25:11.952 --rc genhtml_function_coverage=1 00:25:11.952 --rc genhtml_legend=1 00:25:11.952 --rc geninfo_all_blocks=1 00:25:11.952 --rc geninfo_unexecuted_blocks=1 00:25:11.952 00:25:11.952 ' 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.952 --rc genhtml_branch_coverage=1 00:25:11.952 --rc genhtml_function_coverage=1 00:25:11.952 --rc genhtml_legend=1 00:25:11.952 --rc geninfo_all_blocks=1 00:25:11.952 --rc geninfo_unexecuted_blocks=1 00:25:11.952 00:25:11.952 ' 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.952 --rc genhtml_branch_coverage=1 00:25:11.952 --rc genhtml_function_coverage=1 00:25:11.952 --rc genhtml_legend=1 00:25:11.952 --rc geninfo_all_blocks=1 00:25:11.952 --rc geninfo_unexecuted_blocks=1 00:25:11.952 00:25:11.952 ' 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:11.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.952 --rc genhtml_branch_coverage=1 00:25:11.952 --rc genhtml_function_coverage=1 00:25:11.952 --rc 
genhtml_legend=1 00:25:11.952 --rc geninfo_all_blocks=1 00:25:11.952 --rc geninfo_unexecuted_blocks=1 00:25:11.952 00:25:11.952 ' 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.952 13:24:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.952 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.953 13:24:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.483 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.483 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.483 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.483 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.484 13:24:11 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:14.484 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:14.484 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:14.484 Found net devices under 0000:09:00.0: cvl_0_0 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:14.484 Found net devices under 0000:09:00.1: cvl_0_1 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.484 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:25:14.485 00:25:14.485 --- 10.0.0.2 ping statistics --- 00:25:14.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.485 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:25:14.485 00:25:14.485 --- 10.0.0.1 ping statistics --- 00:25:14.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.485 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3244115 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 3244115 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3244115 ']' 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.485 13:24:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.485 [2024-11-25 13:24:11.773701] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:25:14.485 [2024-11-25 13:24:11.773780] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.485 [2024-11-25 13:24:11.842454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:14.485 [2024-11-25 13:24:11.896676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.485 [2024-11-25 13:24:11.896743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.485 [2024-11-25 13:24:11.896771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.485 [2024-11-25 13:24:11.896782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:14.485 [2024-11-25 13:24:11.896791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.485 [2024-11-25 13:24:11.898354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.485 [2024-11-25 13:24:11.898407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.485 [2024-11-25 13:24:11.898411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.485 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.485 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:14.485 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.485 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.485 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:14.485 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.485 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:14.743 [2024-11-25 13:24:12.283412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.743 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:15.000 Malloc0 00:25:15.000 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.258 13:24:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.516 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.775 [2024-11-25 13:24:13.386168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.775 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.033 [2024-11-25 13:24:13.650980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.033 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.291 [2024-11-25 13:24:13.928023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.291 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3244406 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3244406 /var/tmp/bdevperf.sock 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 3244406 ']' 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.549 13:24:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.807 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.807 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:16.807 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.065 NVMe0n1 00:25:17.065 13:24:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.632 00:25:17.632 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3244472 00:25:17.632 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.632 13:24:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:18.566 13:24:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.824 [2024-11-25 13:24:16.273965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b490 is same with the state(6) to be set 00:25:18.824 (last message repeated through [2024-11-25 13:24:16.274942]) 00:25:18.825 13:24:16 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@45 -- # sleep 3 00:25:22.109 13:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:22.368 00:25:22.368 13:24:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:22.627 13:24:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:25.911 13:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.911 [2024-11-25 13:24:23.300073] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.911 13:24:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:26.846 13:24:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:27.104 [2024-11-25 13:24:24.588347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b01260 is same with the state(6) to be set 00:25:27.104 (last message repeated through [2024-11-25 13:24:24.588500]) 00:25:27.104 13:24:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3244472 00:25:33.669 { 00:25:33.670 "results": [ 00:25:33.670 { 00:25:33.670 "job": "NVMe0n1", 00:25:33.670 "core_mask": "0x1", 00:25:33.670 "workload": "verify", 00:25:33.670 "status": "finished", 00:25:33.670 "verify_range": { 00:25:33.670 "start": 0, 00:25:33.670 "length": 16384 00:25:33.670 }, 00:25:33.670 "queue_depth": 128, 00:25:33.670 "io_size": 4096, 00:25:33.670 "runtime": 15.013661, 00:25:33.670 "iops": 8445.908030026787, 00:25:33.670 "mibps": 32.99182824229214, 00:25:33.670 "io_failed": 8125, 00:25:33.670 "io_timeout": 0, 00:25:33.670 "avg_latency_us": 14214.633382989077, 00:25:33.670 "min_latency_us": 521.8607407407408, 00:25:33.670 "max_latency_us": 17087.905185185184 00:25:33.670 } 00:25:33.670 ], 00:25:33.670 "core_count": 1 00:25:33.670 } 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3244406 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3244406 ']' 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0
3244406 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3244406 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3244406' 00:25:33.670 killing process with pid 3244406 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3244406 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3244406 00:25:33.670 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:33.670 [2024-11-25 13:24:13.993803] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:25:33.670 [2024-11-25 13:24:13.993892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244406 ] 00:25:33.670 [2024-11-25 13:24:14.061297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.670 [2024-11-25 13:24:14.120355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.670 Running I/O for 15 seconds... 
00:25:33.670 8531.00 IOPS, 33.32 MiB/s [2024-11-25T12:24:31.329Z] [2024-11-25 13:24:16.275356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-11-25 13:24:16.275398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.670 [2024-11-25 13:24:16.275425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-11-25 13:24:16.275441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.670 [2024-11-25 13:24:16.275457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-11-25 13:24:16.275472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.670 [2024-11-25 13:24:16.275487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-11-25 13:24:16.275501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.670 [2024-11-25 13:24:16.275517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.670 [2024-11-25 13:24:16.275531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.670 [2024-11-25 13:24:16.275546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.670 [2024-11-25 13:24:16.275559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.670 (analogous READ print_command / ABORTED - SQ DELETION print_completion pairs repeated for lba:79032 through lba:79152) 00:25:33.671 [2024-11-25 13:24:16.276056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.671 [2024-11-25 13:24:16.276068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.671 [2024-11-25 13:24:16.276583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276752] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.276975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.276988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.671 [2024-11-25 13:24:16.277002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.671 [2024-11-25 13:24:16.277015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 
[2024-11-25 13:24:16.277069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.672 [2024-11-25 13:24:16.277510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:33.672 [2024-11-25 13:24:16.277565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.672 [2024-11-25 13:24:16.277912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.672 [2024-11-25 13:24:16.277926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.277943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.277958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.277971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.277985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.277997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 
[2024-11-25 13:24:16.278050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.673 [2024-11-25 13:24:16.278380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:33.673 [2024-11-25 13:24:16.278394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.673 [2024-11-25 13:24:16.278407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for queued WRITE commands lba:79816 through lba:80000 (len:8 each), all completed ABORTED - SQ DELETION (00/08) ...]
00:25:33.674 [2024-11-25 13:24:16.279114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:33.674 [2024-11-25 13:24:16.279130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:33.674 [2024-11-25 13:24:16.279141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79560 len:8 PRP1 0x0 PRP2 0x0
00:25:33.674 [2024-11-25 13:24:16.279160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.674 [2024-11-25 13:24:16.279223] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) likewise completed ABORTED - SQ DELETION (00/08) ...]
00:25:33.674 [2024-11-25 13:24:16.279407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:33.674 [2024-11-25 13:24:16.282722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:33.674 [2024-11-25 13:24:16.282758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8d640 (9): Bad file descriptor
00:25:33.674 [2024-11-25 13:24:16.385250] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
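[Editor's note: the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" / "Resetting controller successful" sequence above is the expected behavior of an SPDK bdev_nvme controller with a secondary failover path. A minimal sketch of how such a two-path attachment might be configured via SPDK's RPC script is shown below; the bdev name `Nvme0` and the addresses are assumptions taken from this log, not part of the test script itself.]

```shell
# Sketch only: attach one controller with a primary and a failover TCP path.
# Primary path (10.0.0.2:4420):
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Secondary path (10.0.0.2:4421), registered in failover multipath mode;
# I/O moves here when the primary path fails, as in the log above:
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
```

When the primary connection drops, queued I/O on the old submission queue is completed with ABORTED - SQ DELETION and resubmitted on the surviving path after the controller reset completes.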
00:25:33.674 8066.50 IOPS, 31.51 MiB/s [2024-11-25T12:24:31.333Z] 8249.67 IOPS, 32.23 MiB/s [2024-11-25T12:24:31.333Z] 8341.00 IOPS, 32.58 MiB/s [2024-11-25T12:24:31.333Z]
[... four queued ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) completed ABORTED - SQ DELETION (00/08) at 13:24:20.042840-043010 ...]
00:25:33.674 [2024-11-25 13:24:20.043010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8d640 is same with the state(6) to be set
00:25:33.674 [2024-11-25 13:24:20.043076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:33.674 [2024-11-25 13:24:20.043096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for queued WRITE commands lba:102944 through lba:103528 (len:8 each), all completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-25 13:24:20.045931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.045946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.045967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.045982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.045995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.677 [2024-11-25 13:24:20.046442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.677 [2024-11-25 13:24:20.046498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.677 [2024-11-25 13:24:20.046644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.677 [2024-11-25 13:24:20.046659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:33.678 [2024-11-25 13:24:20.046924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.046980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.046998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.678 [2024-11-25 13:24:20.047209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.678 [2024-11-25 13:24:20.047236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:33.678 [2024-11-25 13:24:20.047428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.678 [2024-11-25 13:24:20.047456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.678 [2024-11-25 13:24:20.047500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.678 [2024-11-25 13:24:20.047520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102928 len:8 PRP1 0x0 PRP2 0x0 00:25:33.678 [2024-11-25 13:24:20.047534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.678 [2024-11-25 13:24:20.047595] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:33.678 [2024-11-25 13:24:20.047614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:33.678 [2024-11-25 13:24:20.050853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:33.678 [2024-11-25 13:24:20.050892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8d640 (9): Bad file descriptor 00:25:33.678 [2024-11-25 13:24:20.115983] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:25:33.678 8279.20 IOPS, 32.34 MiB/s [2024-11-25T12:24:31.337Z] 8329.50 IOPS, 32.54 MiB/s [2024-11-25T12:24:31.337Z] 8372.29 IOPS, 32.70 MiB/s [2024-11-25T12:24:31.337Z] 8404.62 IOPS, 32.83 MiB/s [2024-11-25T12:24:31.338Z] 8417.22 IOPS, 32.88 MiB/s [2024-11-25T12:24:31.338Z]
[2024-11-25 13:24:24.589705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:33.679 [2024-11-25 13:24:24.589747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated notices elided: queued READ commands (lba:38184-38536, qid:1), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:33.680 [2024-11-25 13:24:24.591050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.680 [2024-11-25 13:24:24.591344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.680 [2024-11-25 13:24:24.591378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 
[2024-11-25 13:24:24.591393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.680 [2024-11-25 13:24:24.591406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.680 [2024-11-25 13:24:24.591435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.680 [2024-11-25 13:24:24.591462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.680 [2024-11-25 13:24:24.591490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.680 [2024-11-25 13:24:24.591504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.681 [2024-11-25 13:24:24.591573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.681 [2024-11-25 13:24:24.591600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 
lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 
13:24:24.591878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.591985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.591998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.681 [2024-11-25 13:24:24.592215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.681 [2024-11-25 13:24:24.592229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 
[2024-11-25 13:24:24.592686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592852] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.592977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.592991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.593004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.593019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.593032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.593046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.593059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.593074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.593087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.682 [2024-11-25 13:24:24.593101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.682 [2024-11-25 13:24:24.593114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 
nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.683 [2024-11-25 13:24:24.593416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.683 [2024-11-25 13:24:24.593467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.683 [2024-11-25 13:24:24.593479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39200 len:8 PRP1 0x0 PRP2 0x0 00:25:33.683 [2024-11-25 13:24:24.593492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593556] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:33.683 [2024-11-25 13:24:24.593594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.683 [2024-11-25 13:24:24.593612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.683 [2024-11-25 13:24:24.593645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.683 [2024-11-25 13:24:24.593672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.683 [2024-11-25 13:24:24.593699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.683 [2024-11-25 13:24:24.593711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:33.683 [2024-11-25 13:24:24.593751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8d640 (9): Bad file descriptor 00:25:33.683 [2024-11-25 13:24:24.596989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:33.683 [2024-11-25 13:24:24.620120] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:33.683 8415.30 IOPS, 32.87 MiB/s [2024-11-25T12:24:31.342Z] 8434.00 IOPS, 32.95 MiB/s [2024-11-25T12:24:31.342Z] 8450.67 IOPS, 33.01 MiB/s [2024-11-25T12:24:31.342Z] 8455.85 IOPS, 33.03 MiB/s [2024-11-25T12:24:31.342Z] 8450.50 IOPS, 33.01 MiB/s [2024-11-25T12:24:31.342Z] 8445.13 IOPS, 32.99 MiB/s 00:25:33.683 Latency(us) 00:25:33.683 [2024-11-25T12:24:31.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:33.683 Verification LBA range: start 0x0 length 0x4000 00:25:33.683 NVMe0n1 : 15.01 8445.91 32.99 541.17 0.00 14214.63 521.86 17087.91 00:25:33.683 [2024-11-25T12:24:31.342Z] =================================================================================================================== 00:25:33.683 [2024-11-25T12:24:31.342Z] Total : 8445.91 32.99 541.17 0.00 14214.63 521.86 17087.91 00:25:33.683 Received shutdown signal, test time was about 15.000000 seconds 00:25:33.683 00:25:33.683 Latency(us) 00:25:33.683 [2024-11-25T12:24:31.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.683 [2024-11-25T12:24:31.342Z] =================================================================================================================== 00:25:33.683 [2024-11-25T12:24:31.342Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3246262 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
128 -o 4096 -w verify -t 1 -f 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3246262 /var/tmp/bdevperf.sock 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3246262 ']' 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:33.683 13:24:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:33.684 [2024-11-25 13:24:31.017360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:33.684 13:24:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:33.684 [2024-11-25 13:24:31.318284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:33.942 13:24:31 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.200 NVMe0n1 00:25:34.200 13:24:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.457 00:25:34.457 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:35.023 00:25:35.023 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:35.023 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:35.023 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.588 13:24:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:38.940 13:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.940 13:24:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:38.940 13:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3246934 00:25:38.940 13:24:36 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.940 13:24:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3246934 00:25:39.887 { 00:25:39.887 "results": [ 00:25:39.887 { 00:25:39.887 "job": "NVMe0n1", 00:25:39.887 "core_mask": "0x1", 00:25:39.887 "workload": "verify", 00:25:39.887 "status": "finished", 00:25:39.887 "verify_range": { 00:25:39.887 "start": 0, 00:25:39.887 "length": 16384 00:25:39.887 }, 00:25:39.887 "queue_depth": 128, 00:25:39.887 "io_size": 4096, 00:25:39.887 "runtime": 1.012015, 00:25:39.887 "iops": 8413.906908494439, 00:25:39.887 "mibps": 32.8668238613064, 00:25:39.887 "io_failed": 0, 00:25:39.887 "io_timeout": 0, 00:25:39.887 "avg_latency_us": 15150.272435658206, 00:25:39.887 "min_latency_us": 2985.528888888889, 00:25:39.887 "max_latency_us": 15437.368888888888 00:25:39.887 } 00:25:39.887 ], 00:25:39.887 "core_count": 1 00:25:39.887 } 00:25:39.887 13:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.887 [2024-11-25 13:24:30.518377] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:25:39.887 [2024-11-25 13:24:30.518471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246262 ] 00:25:39.888 [2024-11-25 13:24:30.587417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.888 [2024-11-25 13:24:30.645002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.888 [2024-11-25 13:24:32.929861] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:39.888 [2024-11-25 13:24:32.929939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.888 [2024-11-25 13:24:32.929962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.888 [2024-11-25 13:24:32.929994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.888 [2024-11-25 13:24:32.930007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.888 [2024-11-25 13:24:32.930021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.888 [2024-11-25 13:24:32.930034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.888 [2024-11-25 13:24:32.930049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.888 [2024-11-25 13:24:32.930062] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.888 [2024-11-25 13:24:32.930075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:39.888 [2024-11-25 13:24:32.930119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:39.888 [2024-11-25 13:24:32.930150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x689640 (9): Bad file descriptor 00:25:39.888 [2024-11-25 13:24:32.934753] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:39.888 Running I/O for 1 seconds... 00:25:39.888 8386.00 IOPS, 32.76 MiB/s 00:25:39.888 Latency(us) 00:25:39.888 [2024-11-25T12:24:37.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.888 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:39.888 Verification LBA range: start 0x0 length 0x4000 00:25:39.888 NVMe0n1 : 1.01 8413.91 32.87 0.00 0.00 15150.27 2985.53 15437.37 00:25:39.888 [2024-11-25T12:24:37.547Z] =================================================================================================================== 00:25:39.888 [2024-11-25T12:24:37.547Z] Total : 8413.91 32.87 0.00 0.00 15150.27 2985.53 15437.37 00:25:39.888 13:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.888 13:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:40.147 13:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.405 13:24:37 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.405 13:24:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:40.662 13:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.919 13:24:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3246262 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3246262 ']' 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3246262 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.198 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246262 00:25:44.455 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.455 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.455 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3246262' 00:25:44.455 killing 
process with pid 3246262 00:25:44.455 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3246262 00:25:44.455 13:24:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3246262 00:25:44.455 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:44.455 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.712 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.712 rmmod nvme_tcp 00:25:44.712 rmmod nvme_fabrics 00:25:44.970 rmmod nvme_keyring 00:25:44.970 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.970 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:44.970 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:44.970 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3244115 ']' 00:25:44.970 13:24:42 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3244115 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3244115 ']' 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3244115 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3244115 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3244115' 00:25:44.971 killing process with pid 3244115 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3244115 00:25:44.971 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3244115 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.230 13:24:42 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.230 13:24:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.133 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.133 00:25:47.133 real 0m35.434s 00:25:47.133 user 2m5.286s 00:25:47.133 sys 0m5.790s 00:25:47.133 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.133 13:24:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:47.133 ************************************ 00:25:47.133 END TEST nvmf_failover 00:25:47.133 ************************************ 00:25:47.133 13:24:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:47.133 13:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.133 13:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.133 13:24:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.392 ************************************ 00:25:47.392 START TEST nvmf_host_discovery 00:25:47.392 ************************************ 00:25:47.392 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:47.392 * Looking for test storage... 
00:25:47.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.392 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.392 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.393 --rc genhtml_branch_coverage=1 00:25:47.393 --rc genhtml_function_coverage=1 00:25:47.393 --rc 
genhtml_legend=1 00:25:47.393 --rc geninfo_all_blocks=1 00:25:47.393 --rc geninfo_unexecuted_blocks=1 00:25:47.393 00:25:47.393 ' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.393 --rc genhtml_branch_coverage=1 00:25:47.393 --rc genhtml_function_coverage=1 00:25:47.393 --rc genhtml_legend=1 00:25:47.393 --rc geninfo_all_blocks=1 00:25:47.393 --rc geninfo_unexecuted_blocks=1 00:25:47.393 00:25:47.393 ' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.393 --rc genhtml_branch_coverage=1 00:25:47.393 --rc genhtml_function_coverage=1 00:25:47.393 --rc genhtml_legend=1 00:25:47.393 --rc geninfo_all_blocks=1 00:25:47.393 --rc geninfo_unexecuted_blocks=1 00:25:47.393 00:25:47.393 ' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.393 --rc genhtml_branch_coverage=1 00:25:47.393 --rc genhtml_function_coverage=1 00:25:47.393 --rc genhtml_legend=1 00:25:47.393 --rc geninfo_all_blocks=1 00:25:47.393 --rc geninfo_unexecuted_blocks=1 00:25:47.393 00:25:47.393 ' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.393 13:24:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.393 13:24:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.393 13:24:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:47.393 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.394 13:24:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.926 
13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.926 13:24:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.926 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:49.927 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:49.927 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:49.927 Found net devices under 0000:09:00.0: cvl_0_0 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:49.927 Found net devices under 0000:09:00.1: cvl_0_1 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:25:49.927 00:25:49.927 --- 10.0.0.2 ping statistics --- 00:25:49.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.927 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:25:49.927 00:25:49.927 --- 10.0.0.1 ping statistics --- 00:25:49.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.927 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.927 
13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3249686 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3249686 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3249686 ']' 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.927 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.927 [2024-11-25 13:24:47.360444] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:25:49.927 [2024-11-25 13:24:47.360539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.927 [2024-11-25 13:24:47.434146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.927 [2024-11-25 13:24:47.490281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.927 [2024-11-25 13:24:47.490355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.927 [2024-11-25 13:24:47.490380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.927 [2024-11-25 13:24:47.490391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.927 [2024-11-25 13:24:47.490401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.927 [2024-11-25 13:24:47.490991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 [2024-11-25 13:24:47.637800] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 [2024-11-25 13:24:47.645991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:50.186 13:24:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 null0 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 null1 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3249717 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3249717 /tmp/host.sock 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 3249717 ']' 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:50.186 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.186 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.186 [2024-11-25 13:24:47.725053] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:25:50.186 [2024-11-25 13:24:47.725131] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249717 ] 00:25:50.186 [2024-11-25 13:24:47.798492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.444 [2024-11-25 13:24:47.861507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:50.444 
13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.444 13:24:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.444 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:50.445 13:24:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.445 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:50.703 
13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 [2024-11-25 13:24:48.279639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.703 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:50.961 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 
00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:50.962 13:24:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:51.527 [2024-11-25 13:24:49.014888] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:51.528 [2024-11-25 13:24:49.014915] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:51.528 [2024-11-25 13:24:49.014939] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:51.528 [2024-11-25 13:24:49.101225] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:51.785 [2024-11-25 13:24:49.196057] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:51.785 [2024-11-25 13:24:49.197174] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1534080:1 started. 
00:25:51.785 [2024-11-25 13:24:49.199001] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:51.785 [2024-11-25 13:24:49.199022] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:51.785 [2024-11-25 13:24:49.203848] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1534080 was disconnected and freed. delete nvme_qpair. 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:52.042 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.043 13:24:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.043 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.302 [2024-11-25 13:24:49.812643] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1534540:1 started. 
00:25:52.302 [2024-11-25 13:24:49.816078] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1534540 was disconnected and freed. delete nvme_qpair. 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.302 [2024-11-25 13:24:49.884513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:52.302 [2024-11-25 13:24:49.885464] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:52.302 [2024-11-25 13:24:49.885511] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.302 13:24:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.302 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.560 [2024-11-25 13:24:49.972207] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:52.560 13:24:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.560 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:52.560 13:24:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:52.818 [2024-11-25 13:24:50.232723] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:52.818 [2024-11-25 13:24:50.232808] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:52.818 [2024-11-25 13:24:50.232826] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:25:52.818 [2024-11-25 13:24:50.232835] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:53.384 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.644 [2024-11-25 13:24:51.112268] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:53.644 [2024-11-25 13:24:51.112318] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:53.644 [2024-11-25 13:24:51.117576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.644 [2024-11-25 13:24:51.117629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.644 [2024-11-25 13:24:51.117647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.644 [2024-11-25 13:24:51.117660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.644 [2024-11-25 13:24:51.117673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.644 [2024-11-25 13:24:51.117697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.644 [2024-11-25 13:24:51.117711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.644 [2024-11-25 13:24:51.117724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.644 [2024-11-25 13:24:51.117737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:53.644 13:24:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.644 [2024-11-25 13:24:51.127572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.644 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.644 [2024-11-25 13:24:51.137624] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.644 [2024-11-25 13:24:51.137669] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.644 [2024-11-25 13:24:51.137687] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.644 [2024-11-25 13:24:51.137695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.645 [2024-11-25 13:24:51.137727] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:53.645 [2024-11-25 13:24:51.137983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.645 [2024-11-25 13:24:51.138015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504650 with addr=10.0.0.2, port=4420 00:25:53.645 [2024-11-25 13:24:51.138032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.645 [2024-11-25 13:24:51.138055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.645 [2024-11-25 13:24:51.138090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.645 [2024-11-25 13:24:51.138108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.645 [2024-11-25 13:24:51.138124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.645 [2024-11-25 13:24:51.138137] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.645 [2024-11-25 13:24:51.138148] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.645 [2024-11-25 13:24:51.138157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.645 [2024-11-25 13:24:51.147760] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.645 [2024-11-25 13:24:51.147781] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:53.645 [2024-11-25 13:24:51.147790] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.147797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.645 [2024-11-25 13:24:51.147821] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.147957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.645 [2024-11-25 13:24:51.147986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504650 with addr=10.0.0.2, port=4420 00:25:53.645 [2024-11-25 13:24:51.148002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.645 [2024-11-25 13:24:51.148024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.645 [2024-11-25 13:24:51.148046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.645 [2024-11-25 13:24:51.148060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.645 [2024-11-25 13:24:51.148073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.645 [2024-11-25 13:24:51.148085] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.645 [2024-11-25 13:24:51.148094] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.645 [2024-11-25 13:24:51.148102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:53.645 [2024-11-25 13:24:51.157870] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.645 [2024-11-25 13:24:51.157906] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.645 [2024-11-25 13:24:51.157917] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.157925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.645 [2024-11-25 13:24:51.157950] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.158121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.645 [2024-11-25 13:24:51.158150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504650 with addr=10.0.0.2, port=4420 00:25:53.645 [2024-11-25 13:24:51.158167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.645 [2024-11-25 13:24:51.158190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.645 [2024-11-25 13:24:51.158224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.645 [2024-11-25 13:24:51.158242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.645 [2024-11-25 13:24:51.158257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.645 [2024-11-25 13:24:51.158269] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:53.645 [2024-11-25 13:24:51.158278] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.645 [2024-11-25 13:24:51.158286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.645 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:25:53.645 [2024-11-25 13:24:51.167984] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.645 [2024-11-25 13:24:51.168007] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.645 [2024-11-25 13:24:51.168017] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.168024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.645 [2024-11-25 13:24:51.168049] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.168231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.645 [2024-11-25 13:24:51.168262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504650 with addr=10.0.0.2, port=4420 00:25:53.645 [2024-11-25 13:24:51.168278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.645 [2024-11-25 13:24:51.168350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.645 [2024-11-25 13:24:51.168374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.645 [2024-11-25 13:24:51.168389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.645 [2024-11-25 13:24:51.168403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.645 [2024-11-25 13:24:51.168415] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:53.645 [2024-11-25 13:24:51.168424] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.645 [2024-11-25 13:24:51.168432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.645 [2024-11-25 13:24:51.178083] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.645 [2024-11-25 13:24:51.178105] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.645 [2024-11-25 13:24:51.178115] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.178122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.645 [2024-11-25 13:24:51.178147] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:53.645 [2024-11-25 13:24:51.178340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.645 [2024-11-25 13:24:51.178370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504650 with addr=10.0.0.2, port=4420 00:25:53.645 [2024-11-25 13:24:51.178387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.645 [2024-11-25 13:24:51.178422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.645 [2024-11-25 13:24:51.178461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.645 [2024-11-25 13:24:51.178479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.645 [2024-11-25 13:24:51.178493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.645 [2024-11-25 13:24:51.178505] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.645 [2024-11-25 13:24:51.178514] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.645 [2024-11-25 13:24:51.178522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.645 [2024-11-25 13:24:51.188182] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.645 [2024-11-25 13:24:51.188204] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:53.645 [2024-11-25 13:24:51.188214] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.188222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.645 [2024-11-25 13:24:51.188255] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.645 [2024-11-25 13:24:51.188394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.645 [2024-11-25 13:24:51.188422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504650 with addr=10.0.0.2, port=4420 00:25:53.646 [2024-11-25 13:24:51.188438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.646 [2024-11-25 13:24:51.188461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.646 [2024-11-25 13:24:51.188482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.646 [2024-11-25 13:24:51.188495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.646 [2024-11-25 13:24:51.188509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.646 [2024-11-25 13:24:51.188521] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.646 [2024-11-25 13:24:51.188529] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.646 [2024-11-25 13:24:51.188537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.646 [2024-11-25 13:24:51.198289] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.646 [2024-11-25 13:24:51.198333] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.646 [2024-11-25 13:24:51.198345] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.646 [2024-11-25 13:24:51.198352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:53.646 [2024-11-25 13:24:51.198377] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:53.646 [2024-11-25 13:24:51.198479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:53.646 [2024-11-25 13:24:51.198507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1504650 with addr=10.0.0.2, port=4420 00:25:53.646 [2024-11-25 13:24:51.198524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1504650 is same with the state(6) to be set 00:25:53.646 [2024-11-25 13:24:51.198547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1504650 (9): Bad file descriptor 00:25:53.646 [2024-11-25 13:24:51.198579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.646 [2024-11-25 13:24:51.198597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.646 [2024-11-25 13:24:51.198636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.646 [2024-11-25 13:24:51.198649] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.646 [2024-11-25 13:24:51.198663] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.646 [2024-11-25 13:24:51.198671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:53.646 [2024-11-25 13:24:51.200276] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:53.646 [2024-11-25 13:24:51.200342] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 
-- # expected_count=0 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:53.646 13:24:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.646 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:53.904 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.905 
13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.905 13:24:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.905 13:24:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.838 [2024-11-25 13:24:52.474909] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.838 [2024-11-25 13:24:52.474942] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.838 [2024-11-25 13:24:52.474965] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.096 [2024-11-25 13:24:52.561244] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:55.354 [2024-11-25 13:24:52.868820] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:55.354 [2024-11-25 13:24:52.869659] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x151b850:1 started. 00:25:55.354 [2024-11-25 13:24:52.871835] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.354 [2024-11-25 13:24:52.871879] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.354 [2024-11-25 13:24:52.873764] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x151b850 was disconnected and freed. delete nvme_qpair. 
00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.354 request: 00:25:55.354 { 00:25:55.354 "name": "nvme", 00:25:55.354 "trtype": "tcp", 00:25:55.354 "traddr": "10.0.0.2", 00:25:55.354 "adrfam": "ipv4", 00:25:55.354 "trsvcid": "8009", 00:25:55.354 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:55.354 "wait_for_attach": true, 00:25:55.354 "method": "bdev_nvme_start_discovery", 00:25:55.354 "req_id": 1 00:25:55.354 } 00:25:55.354 Got JSON-RPC error response 00:25:55.354 response: 00:25:55.354 { 00:25:55.354 "code": -17, 00:25:55.354 "message": "File exists" 00:25:55.354 } 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.354 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.354 request: 00:25:55.354 { 00:25:55.354 "name": "nvme_second", 00:25:55.354 "trtype": "tcp", 00:25:55.354 "traddr": "10.0.0.2", 00:25:55.354 "adrfam": "ipv4", 00:25:55.354 "trsvcid": "8009", 00:25:55.354 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:55.354 "wait_for_attach": true, 00:25:55.354 "method": "bdev_nvme_start_discovery", 00:25:55.354 "req_id": 1 00:25:55.354 } 00:25:55.354 Got JSON-RPC error response 00:25:55.354 response: 00:25:55.354 { 00:25:55.354 "code": -17, 00:25:55.355 "message": "File exists" 00:25:55.355 } 
00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.355 13:24:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:55.355 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.612 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:55.612 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:55.612 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.612 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.612 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:55.612 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.612 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:55.613 13:24:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.544 [2024-11-25 13:24:54.091346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.544 [2024-11-25 13:24:54.091414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166c2f0 with addr=10.0.0.2, port=8010 00:25:56.544 [2024-11-25 13:24:54.091446] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:56.544 [2024-11-25 13:24:54.091460] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:56.544 [2024-11-25 13:24:54.091483] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:57.475 [2024-11-25 13:24:55.093787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.475 [2024-11-25 13:24:55.093850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x166c2f0 with addr=10.0.0.2, port=8010 00:25:57.475 [2024-11-25 13:24:55.093880] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:57.475 [2024-11-25 13:24:55.093894] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:57.475 [2024-11-25 13:24:55.093907] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:58.846 [2024-11-25 13:24:56.095978] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:58.846 request: 00:25:58.846 { 00:25:58.846 "name": "nvme_second", 00:25:58.846 "trtype": "tcp", 00:25:58.846 "traddr": "10.0.0.2", 00:25:58.846 "adrfam": "ipv4", 00:25:58.846 "trsvcid": "8010", 00:25:58.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:58.846 "wait_for_attach": false, 00:25:58.846 "attach_timeout_ms": 3000, 00:25:58.846 "method": "bdev_nvme_start_discovery", 00:25:58.846 "req_id": 1 
00:25:58.846 } 00:25:58.846 Got JSON-RPC error response 00:25:58.846 response: 00:25:58.846 { 00:25:58.846 "code": -110, 00:25:58.846 "message": "Connection timed out" 00:25:58.846 } 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3249717 00:25:58.846 13:24:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:58.846 rmmod nvme_tcp 00:25:58.846 rmmod nvme_fabrics 00:25:58.846 rmmod nvme_keyring 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3249686 ']' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3249686 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3249686 ']' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3249686 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3249686 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3249686' 00:25:58.846 killing process with pid 3249686 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3249686 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3249686 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.846 13:24:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:26:01.379 00:26:01.379 real 0m13.735s 00:26:01.379 user 0m19.788s 00:26:01.379 sys 0m3.004s 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.379 ************************************ 00:26:01.379 END TEST nvmf_host_discovery 00:26:01.379 ************************************ 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.379 ************************************ 00:26:01.379 START TEST nvmf_host_multipath_status 00:26:01.379 ************************************ 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:01.379 * Looking for test storage... 
00:26:01.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:01.379 13:24:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.379 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.380 13:24:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.380 --rc genhtml_branch_coverage=1 00:26:01.380 --rc genhtml_function_coverage=1 00:26:01.380 --rc genhtml_legend=1 00:26:01.380 --rc geninfo_all_blocks=1 00:26:01.380 --rc geninfo_unexecuted_blocks=1 00:26:01.380 00:26:01.380 ' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.380 --rc genhtml_branch_coverage=1 00:26:01.380 --rc genhtml_function_coverage=1 00:26:01.380 --rc genhtml_legend=1 00:26:01.380 --rc geninfo_all_blocks=1 00:26:01.380 --rc geninfo_unexecuted_blocks=1 00:26:01.380 00:26:01.380 ' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.380 --rc genhtml_branch_coverage=1 00:26:01.380 --rc genhtml_function_coverage=1 00:26:01.380 --rc genhtml_legend=1 00:26:01.380 --rc geninfo_all_blocks=1 00:26:01.380 --rc geninfo_unexecuted_blocks=1 00:26:01.380 00:26:01.380 ' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.380 --rc genhtml_branch_coverage=1 00:26:01.380 --rc genhtml_function_coverage=1 00:26:01.380 --rc genhtml_legend=1 00:26:01.380 --rc geninfo_all_blocks=1 00:26:01.380 --rc geninfo_unexecuted_blocks=1 00:26:01.380 00:26:01.380 ' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:01.380 
13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.380 13:24:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.380 13:24:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:03.281 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:03.281 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:03.281 Found net devices under 0000:09:00.0: cvl_0_0 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.281 13:25:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:03.281 Found net devices under 0000:09:00.1: cvl_0_1 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.281 13:25:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.281 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:03.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:26:03.282 00:26:03.282 --- 10.0.0.2 ping statistics --- 00:26:03.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.282 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:26:03.282 00:26:03.282 --- 10.0.0.1 ping statistics --- 00:26:03.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.282 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3252864 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3252864 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3252864 ']' 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.282 13:25:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.539 [2024-11-25 13:25:00.944512] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:26:03.539 [2024-11-25 13:25:00.944609] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.539 [2024-11-25 13:25:01.015710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:03.539 [2024-11-25 13:25:01.074922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.539 [2024-11-25 13:25:01.074981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:03.539 [2024-11-25 13:25:01.074995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.539 [2024-11-25 13:25:01.075006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.539 [2024-11-25 13:25:01.075015] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.539 [2024-11-25 13:25:01.076490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.539 [2024-11-25 13:25:01.076496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3252864 00:26:03.796 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:04.054 [2024-11-25 13:25:01.529481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.054 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:04.312 Malloc0 00:26:04.312 13:25:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:04.569 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.827 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.084 [2024-11-25 13:25:02.678507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.084 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:05.342 [2024-11-25 13:25:02.943226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3253157 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3253157 /var/tmp/bdevperf.sock 00:26:05.342 13:25:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3253157 ']' 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.342 13:25:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:05.600 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.600 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:05.600 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:05.858 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:06.423 Nvme0n1 00:26:06.423 13:25:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:06.988 Nvme0n1 00:26:06.988 13:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:06.988 13:25:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:08.962 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:08.962 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:09.219 13:25:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.478 13:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.852 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.110 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.110 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.110 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.110 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.369 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.369 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.369 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.369 13:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.627 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.627 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.627 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.627 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.885 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.885 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.885 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.885 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.143 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.143 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:12.143 13:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.401 13:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:12.972 13:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:13.906 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:13.906 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:13.906 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.906 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.164 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.164 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.164 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.164 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.423 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.423 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.423 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.423 13:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.681 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.681 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.681 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.681 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.939 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.939 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.939 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.939 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.198 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.198 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.198 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.198 13:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.456 13:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.456 13:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:15.456 13:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.715 13:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:15.973 13:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:16.907 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:16.907 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:16.907 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.907 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.474 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.474 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:17.474 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.474 13:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.474 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.474 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.474 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.474 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.040 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.298 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.298 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.298 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.298 13:25:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.863 13:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.863 13:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:18.864 13:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:18.864 13:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:19.121 13:25:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:20.496 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:20.496 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.496 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.496 13:25:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.496 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.496 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:20.496 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.496 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.755 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.755 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.755 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.755 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.013 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.013 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.013 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.013 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.271 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.271 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.271 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.271 13:25:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.529 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.529 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:21.529 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.529 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.786 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.786 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:21.787 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:22.044 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:22.303 13:25:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:23.677 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:23.677 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:23.677 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.677 13:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.677 13:25:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.677 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:23.677 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.677 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.934 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.934 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.934 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.934 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.191 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.191 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.191 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.191 13:25:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:24.450 
13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.450 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:24.450 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.450 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.708 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.708 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:24.708 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.709 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.967 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.967 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:24.967 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:25.224 13:25:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:25.790 13:25:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:26.724 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:26.724 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:26.724 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.724 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:26.982 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.982 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:26.982 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.982 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.240 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.240 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.240 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.240 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.499 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.499 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.499 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.499 13:25:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:27.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.757 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.015 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.015 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:28.015 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.015 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.274 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.274 13:25:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:28.531 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:28.531 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:28.789 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:29.047 13:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.422 13:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.681 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.681 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.681 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.681 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.940 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.940 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.940 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:30.940 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.198 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.198 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.198 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.198 13:25:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.457 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.457 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.457 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.457 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.714 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.714 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:31.714 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:31.972 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:32.231 13:25:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:33.631 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:33.631 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:33.631 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.631 13:25:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.631 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.631 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:33.631 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.631 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.889 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.889 13:25:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.889 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.889 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.147 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.147 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.147 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.147 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.405 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.405 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.405 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.405 13:25:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.662 13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.662 
13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.662 13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.662 13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.920 13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.920 13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:34.920 13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:35.178 13:25:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:35.437 13:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:36.810 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:36.810 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:36.810 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.810 13:25:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:36.810 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.810 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:36.810 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.810 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.068 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.068 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.068 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.068 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.326 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.326 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.326 13:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.326 13:25:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.585 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.585 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.585 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.585 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.843 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.843 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.843 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.843 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:38.101 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.101 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:38.101 13:25:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.667 13:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:38.925 13:25:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:39.859 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:39.859 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:39.859 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.859 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.117 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.117 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:40.117 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.117 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.375 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.375 13:25:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.375 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.375 13:25:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.633 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.633 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.633 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.633 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.890 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.890 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.890 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.890 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.148 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.148 
13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:41.148 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.148 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.406 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:41.406 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3253157 00:26:41.406 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3253157 ']' 00:26:41.406 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3253157 00:26:41.406 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:41.406 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.406 13:25:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3253157 00:26:41.406 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:41.406 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:41.406 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3253157' 00:26:41.406 killing process with pid 3253157 00:26:41.406 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3253157 00:26:41.406 
13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3253157 00:26:41.667 { 00:26:41.667 "results": [ 00:26:41.667 { 00:26:41.667 "job": "Nvme0n1", 00:26:41.667 "core_mask": "0x4", 00:26:41.667 "workload": "verify", 00:26:41.667 "status": "terminated", 00:26:41.667 "verify_range": { 00:26:41.667 "start": 0, 00:26:41.667 "length": 16384 00:26:41.667 }, 00:26:41.667 "queue_depth": 128, 00:26:41.667 "io_size": 4096, 00:26:41.667 "runtime": 34.402455, 00:26:41.667 "iops": 7918.4174501499965, 00:26:41.667 "mibps": 30.931318164648424, 00:26:41.667 "io_failed": 0, 00:26:41.667 "io_timeout": 0, 00:26:41.667 "avg_latency_us": 16119.104059073701, 00:26:41.667 "min_latency_us": 482.41777777777776, 00:26:41.667 "max_latency_us": 4026531.84 00:26:41.667 } 00:26:41.667 ], 00:26:41.667 "core_count": 1 00:26:41.667 } 00:26:41.667 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3253157 00:26:41.667 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:41.667 [2024-11-25 13:25:03.009846] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:26:41.667 [2024-11-25 13:25:03.009950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253157 ] 00:26:41.667 [2024-11-25 13:25:03.076750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.667 [2024-11-25 13:25:03.134871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.668 Running I/O for 90 seconds... 
00:26:41.668 8448.00 IOPS, 33.00 MiB/s [2024-11-25T12:25:39.327Z] 8411.50 IOPS, 32.86 MiB/s [2024-11-25T12:25:39.327Z] 8408.00 IOPS, 32.84 MiB/s [2024-11-25T12:25:39.327Z] 8407.25 IOPS, 32.84 MiB/s [2024-11-25T12:25:39.327Z] 8373.40 IOPS, 32.71 MiB/s [2024-11-25T12:25:39.327Z] 8379.17 IOPS, 32.73 MiB/s [2024-11-25T12:25:39.327Z] 8407.29 IOPS, 32.84 MiB/s [2024-11-25T12:25:39.327Z] 8434.12 IOPS, 32.95 MiB/s [2024-11-25T12:25:39.327Z] 8459.89 IOPS, 33.05 MiB/s [2024-11-25T12:25:39.327Z] 8455.80 IOPS, 33.03 MiB/s [2024-11-25T12:25:39.327Z] 8447.55 IOPS, 33.00 MiB/s [2024-11-25T12:25:39.327Z] 8440.67 IOPS, 32.97 MiB/s [2024-11-25T12:25:39.327Z] 8436.00 IOPS, 32.95 MiB/s [2024-11-25T12:25:39.327Z] 8429.79 IOPS, 32.93 MiB/s [2024-11-25T12:25:39.327Z] 8422.47 IOPS, 32.90 MiB/s [2024-11-25T12:25:39.327Z] [2024-11-25 13:25:19.668869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.668 [2024-11-25 13:25:19.668922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.668984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96368 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.669976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.669991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.670011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.670042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.670065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.670085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.668 [2024-11-25 13:25:19.670120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.668 [2024-11-25 13:25:19.670143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:26:41.669 [2024-11-25 13:25:19.670206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 
[2024-11-25 13:25:19.670796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.670960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.670999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.671015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 
13:25:19.671037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.671053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.671077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.671092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.671114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.671130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.671153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.671168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.671197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.671213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:41.669 [2024-11-25 13:25:19.671235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.669 [2024-11-25 13:25:19.671251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:41.669 [2024-11-25 13:25:19.671274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.669 [2024-11-25 13:25:19.671312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[log truncated here: identical WRITE (lba:96648 through lba:97272, len:8) command/completion pairs repeat on qid:1, every command failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:26:41.671 7920.00 IOPS, 30.94 MiB/s
[2024-11-25T12:25:39.330Z] 7454.12 IOPS, 29.12 MiB/s
[2024-11-25T12:25:39.330Z] 7040.00 IOPS, 27.50 MiB/s
[2024-11-25T12:25:39.330Z] 6669.47 IOPS, 26.05 MiB/s
[2024-11-25T12:25:39.330Z] 6735.95 IOPS, 26.31 MiB/s
[2024-11-25T12:25:39.330Z] 6819.33 IOPS, 26.64 MiB/s
[2024-11-25T12:25:39.330Z] 6916.59 IOPS, 27.02 MiB/s
[2024-11-25T12:25:39.330Z] 7103.43 IOPS, 27.75 MiB/s
[2024-11-25T12:25:39.330Z] 7272.04 IOPS, 28.41 MiB/s
[2024-11-25T12:25:39.330Z] 7431.60 IOPS, 29.03 MiB/s
[2024-11-25T12:25:39.330Z] 7468.92 IOPS, 29.18 MiB/s
[2024-11-25T12:25:39.330Z] 7500.41 IOPS, 29.30 MiB/s
[2024-11-25T12:25:39.330Z] 7533.86 IOPS, 29.43 MiB/s
[2024-11-25T12:25:39.330Z] 7610.41 IOPS, 29.73 MiB/s
[2024-11-25T12:25:39.330Z] 7710.80 IOPS, 30.12 MiB/s
[2024-11-25T12:25:39.330Z] 7825.13 IOPS, 30.57 MiB/s
[2024-11-25 13:25:36.316376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.671 [2024-11-25 13:25:36.316441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
[log truncated here: further WRITE (lba:32744 through lba:33200, len:8, SGL DATA BLOCK) and READ (lba:32640, 32672, 32712, SGL TRANSPORT DATA BLOCK) command/completion pairs on qid:1, all failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
[2024-11-25 13:25:36.320368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.672 [2024-11-25 13:25:36.320384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:41.672 [2024-11-25 13:25:36.320406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.672 [2024-11-25 13:25:36.320422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:41.672 [2024-11-25 13:25:36.320444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.672 [2024-11-25 13:25:36.320460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:41.672 [2024-11-25 13:25:36.320482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.672 [2024-11-25 13:25:36.320498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:41.672 [2024-11-25 13:25:36.320520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.672 [2024-11-25 13:25:36.320536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 
13:25:36.320574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 13:25:36.320618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 13:25:36.320672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 13:25:36.320709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.673 [2024-11-25 13:25:36.320746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 13:25:36.320784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 
13:25:36.320806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 13:25:36.320821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 13:25:36.320858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:41.673 [2024-11-25 13:25:36.320880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.673 [2024-11-25 13:25:36.320896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:41.673 7900.12 IOPS, 30.86 MiB/s [2024-11-25T12:25:39.332Z] 7914.79 IOPS, 30.92 MiB/s [2024-11-25T12:25:39.332Z] 7925.09 IOPS, 30.96 MiB/s [2024-11-25T12:25:39.332Z] Received shutdown signal, test time was about 34.403274 seconds 00:26:41.673 00:26:41.673 Latency(us) 00:26:41.673 [2024-11-25T12:25:39.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.673 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:41.673 Verification LBA range: start 0x0 length 0x4000 00:26:41.673 Nvme0n1 : 34.40 7918.42 30.93 0.00 0.00 16119.10 482.42 4026531.84 00:26:41.673 [2024-11-25T12:25:39.332Z] =================================================================================================================== 00:26:41.673 [2024-11-25T12:25:39.332Z] Total : 7918.42 30.93 0.00 0.00 16119.10 482.42 4026531.84 00:26:41.673 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:41.930 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:41.930 rmmod nvme_tcp 00:26:42.188 rmmod nvme_fabrics 00:26:42.188 rmmod nvme_keyring 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3252864 ']' 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3252864 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 
3252864 ']' 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3252864 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3252864 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3252864' 00:26:42.188 killing process with pid 3252864 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3252864 00:26:42.188 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3252864 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.448 13:25:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.448 13:25:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.354 00:26:44.354 real 0m43.356s 00:26:44.354 user 2m12.389s 00:26:44.354 sys 0m10.759s 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.354 ************************************ 00:26:44.354 END TEST nvmf_host_multipath_status 00:26:44.354 ************************************ 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.354 ************************************ 00:26:44.354 START TEST nvmf_discovery_remove_ifc 00:26:44.354 ************************************ 00:26:44.354 13:25:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:44.614 * Looking for test storage... 00:26:44.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.614 13:25:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.614 --rc genhtml_branch_coverage=1 00:26:44.614 --rc genhtml_function_coverage=1 00:26:44.614 --rc genhtml_legend=1 00:26:44.614 --rc geninfo_all_blocks=1 00:26:44.614 --rc geninfo_unexecuted_blocks=1 00:26:44.614 00:26:44.614 ' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.614 --rc genhtml_branch_coverage=1 00:26:44.614 --rc genhtml_function_coverage=1 00:26:44.614 --rc genhtml_legend=1 00:26:44.614 --rc geninfo_all_blocks=1 00:26:44.614 --rc geninfo_unexecuted_blocks=1 00:26:44.614 00:26:44.614 ' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.614 --rc genhtml_branch_coverage=1 00:26:44.614 --rc genhtml_function_coverage=1 00:26:44.614 --rc genhtml_legend=1 00:26:44.614 --rc geninfo_all_blocks=1 00:26:44.614 --rc geninfo_unexecuted_blocks=1 00:26:44.614 00:26:44.614 ' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:44.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.614 --rc genhtml_branch_coverage=1 00:26:44.614 --rc genhtml_function_coverage=1 00:26:44.614 --rc genhtml_legend=1 00:26:44.614 --rc geninfo_all_blocks=1 00:26:44.614 --rc geninfo_unexecuted_blocks=1 00:26:44.614 00:26:44.614 ' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.614 13:25:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.614 13:25:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.614 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:44.615 
13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.615 13:25:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.148 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:47.148 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:47.149 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:47.149 Found net devices under 0000:09:00.0: cvl_0_0 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.149 13:25:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:47.149 Found net devices under 0000:09:00.1: cvl_0_1 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:47.149 13:25:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.149 13:25:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:47.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:26:47.149 00:26:47.149 --- 10.0.0.2 ping statistics --- 00:26:47.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.149 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:26:47.149 00:26:47.149 --- 10.0.0.1 ping statistics --- 00:26:47.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.149 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3259624 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3259624 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3259624 ']' 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.149 [2024-11-25 13:25:44.460016] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:26:47.149 [2024-11-25 13:25:44.460108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.149 [2024-11-25 13:25:44.532875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.149 [2024-11-25 13:25:44.586267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.149 [2024-11-25 13:25:44.586346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:47.149 [2024-11-25 13:25:44.586367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.149 [2024-11-25 13:25:44.586377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.149 [2024-11-25 13:25:44.586386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.149 [2024-11-25 13:25:44.586945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.149 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.150 [2024-11-25 13:25:44.733744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.150 [2024-11-25 13:25:44.741893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:47.150 null0 00:26:47.150 [2024-11-25 13:25:44.773839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3259650 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3259650 /tmp/host.sock 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3259650 ']' 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:47.150 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.150 13:25:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.407 [2024-11-25 13:25:44.840858] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:26:47.407 [2024-11-25 13:25:44.840922] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259650 ] 00:26:47.407 [2024-11-25 13:25:44.906457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.407 [2024-11-25 13:25:44.964119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.666 13:25:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.666 13:25:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.598 [2024-11-25 13:25:46.230980] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:48.598 [2024-11-25 13:25:46.231010] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:48.598 [2024-11-25 13:25:46.231031] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:48.856 [2024-11-25 13:25:46.318323] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:48.856 [2024-11-25 13:25:46.420180] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:48.856 [2024-11-25 13:25:46.421222] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x19c0070:1 started. 
00:26:48.856 [2024-11-25 13:25:46.422900] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:48.856 [2024-11-25 13:25:46.422951] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:48.856 [2024-11-25 13:25:46.422986] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:48.856 [2024-11-25 13:25:46.423010] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:48.856 [2024-11-25 13:25:46.423041] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.856 [2024-11-25 13:25:46.429837] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x19c0070 was disconnected and freed. delete nvme_qpair. 
00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:48.856 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.114 13:25:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:50.047 13:25:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:50.981 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.239 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:51.239 13:25:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:52.172 13:25:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.106 13:25:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:53.106 13:25:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:54.480 13:25:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:54.480 [2024-11-25 13:25:51.864419] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:54.480 [2024-11-25 13:25:51.864477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.480 [2024-11-25 13:25:51.864497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.480 [2024-11-25 13:25:51.864512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.480 [2024-11-25 13:25:51.864525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.480 [2024-11-25 13:25:51.864538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.480 [2024-11-25 13:25:51.864551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.480 [2024-11-25 13:25:51.864563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.480 [2024-11-25 13:25:51.864575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.480 [2024-11-25 13:25:51.864603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.480 [2024-11-25 13:25:51.864614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.480 [2024-11-25 13:25:51.864626] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199c910 is same with the state(6) to be set 00:26:54.480 [2024-11-25 13:25:51.874439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199c910 (9): Bad file descriptor 00:26:54.480 [2024-11-25 13:25:51.884478] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.480 [2024-11-25 13:25:51.884500] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.480 [2024-11-25 13:25:51.884511] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.480 [2024-11-25 13:25:51.884519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.480 [2024-11-25 13:25:51.884557] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:55.413 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.413 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.413 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.413 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.413 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.413 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.413 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.413 [2024-11-25 13:25:52.918329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:55.413 [2024-11-25 13:25:52.918381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199c910 with addr=10.0.0.2, port=4420 00:26:55.414 [2024-11-25 13:25:52.918400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199c910 is same with the state(6) to be set 00:26:55.414 [2024-11-25 13:25:52.918428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199c910 (9): Bad file descriptor 00:26:55.414 [2024-11-25 13:25:52.918804] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:55.414 [2024-11-25 13:25:52.918838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:55.414 [2024-11-25 13:25:52.918854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:55.414 [2024-11-25 13:25:52.918868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:55.414 [2024-11-25 13:25:52.918879] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:55.414 [2024-11-25 13:25:52.918889] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:55.414 [2024-11-25 13:25:52.918896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:55.414 [2024-11-25 13:25:52.918909] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:55.414 [2024-11-25 13:25:52.918917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:55.414 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.414 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:55.414 13:25:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:56.349 [2024-11-25 13:25:53.921401] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:56.349 [2024-11-25 13:25:53.921428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:56.349 [2024-11-25 13:25:53.921446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:56.349 [2024-11-25 13:25:53.921459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:56.349 [2024-11-25 13:25:53.921471] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:56.349 [2024-11-25 13:25:53.921483] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:56.349 [2024-11-25 13:25:53.921492] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:56.349 [2024-11-25 13:25:53.921499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:56.349 [2024-11-25 13:25:53.921545] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:56.349 [2024-11-25 13:25:53.921594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.349 [2024-11-25 13:25:53.921613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.349 [2024-11-25 13:25:53.921628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.349 [2024-11-25 13:25:53.921642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.349 [2024-11-25 13:25:53.921655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:56.349 [2024-11-25 13:25:53.921668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.349 [2024-11-25 13:25:53.921680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.349 [2024-11-25 13:25:53.921692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.349 [2024-11-25 13:25:53.921707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.350 [2024-11-25 13:25:53.921719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.350 [2024-11-25 13:25:53.921732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:56.350 [2024-11-25 13:25:53.921985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198bc40 (9): Bad file descriptor 00:26:56.350 [2024-11-25 13:25:53.923004] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:56.350 [2024-11-25 13:25:53.923025] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.350 13:25:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:56.608 13:25:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.541 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.541 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.541 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.542 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.542 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.542 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.542 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.542 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.542 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:57.542 13:25:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.476 [2024-11-25 13:25:55.977504] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.476 [2024-11-25 13:25:55.977526] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.476 [2024-11-25 13:25:55.977549] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.476 [2024-11-25 13:25:56.063856] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.476 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.734 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:58.734 13:25:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.734 [2024-11-25 13:25:56.286132] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:58.734 [2024-11-25 13:25:56.286899] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x19a7260:1 started. 
00:26:58.734 [2024-11-25 13:25:56.288280] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:58.734 [2024-11-25 13:25:56.288344] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:58.734 [2024-11-25 13:25:56.288373] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:58.734 [2024-11-25 13:25:56.288401] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:58.734 [2024-11-25 13:25:56.288414] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:58.735 [2024-11-25 13:25:56.295907] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x19a7260 was disconnected and freed. delete nvme_qpair. 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:59.669 13:25:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3259650 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3259650 ']' 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3259650 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259650 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259650' 00:26:59.669 killing process with pid 3259650 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3259650 00:26:59.669 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3259650 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:59.927 
13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:59.927 rmmod nvme_tcp 00:26:59.927 rmmod nvme_fabrics 00:26:59.927 rmmod nvme_keyring 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3259624 ']' 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3259624 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3259624 ']' 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3259624 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259624 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259624' 00:26:59.927 
killing process with pid 3259624 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3259624 00:26:59.927 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3259624 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.233 13:25:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.167 13:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:02.167 00:27:02.167 real 0m17.830s 00:27:02.167 user 0m25.839s 00:27:02.167 sys 0m3.059s 00:27:02.167 13:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.167 13:25:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.167 ************************************ 00:27:02.167 END TEST nvmf_discovery_remove_ifc 00:27:02.167 ************************************ 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.425 ************************************ 00:27:02.425 START TEST nvmf_identify_kernel_target 00:27:02.425 ************************************ 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:02.425 * Looking for test storage... 
00:27:02.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:02.425 13:25:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:02.425 13:26:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:02.425 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.426 13:26:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.426 --rc genhtml_branch_coverage=1 00:27:02.426 --rc genhtml_function_coverage=1 00:27:02.426 --rc genhtml_legend=1 00:27:02.426 --rc geninfo_all_blocks=1 00:27:02.426 --rc geninfo_unexecuted_blocks=1 00:27:02.426 00:27:02.426 ' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.426 --rc genhtml_branch_coverage=1 00:27:02.426 --rc genhtml_function_coverage=1 00:27:02.426 --rc genhtml_legend=1 00:27:02.426 --rc geninfo_all_blocks=1 00:27:02.426 --rc geninfo_unexecuted_blocks=1 00:27:02.426 00:27:02.426 ' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.426 --rc genhtml_branch_coverage=1 00:27:02.426 --rc genhtml_function_coverage=1 00:27:02.426 --rc genhtml_legend=1 00:27:02.426 --rc geninfo_all_blocks=1 00:27:02.426 --rc geninfo_unexecuted_blocks=1 00:27:02.426 00:27:02.426 ' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:02.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.426 --rc genhtml_branch_coverage=1 00:27:02.426 --rc genhtml_function_coverage=1 00:27:02.426 --rc genhtml_legend=1 00:27:02.426 --rc geninfo_all_blocks=1 00:27:02.426 --rc geninfo_unexecuted_blocks=1 00:27:02.426 00:27:02.426 ' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:02.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:02.426 13:26:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.960 13:26:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.960 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:04.961 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.961 13:26:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:04.961 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.961 13:26:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:04.961 Found net devices under 0000:09:00.0: cvl_0_0 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:04.961 Found net devices under 0000:09:00.1: cvl_0_1 
00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:04.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:04.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:27:04.961 00:27:04.961 --- 10.0.0.2 ping statistics --- 00:27:04.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.961 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:27:04.961 00:27:04.961 --- 10.0.0.1 ping statistics --- 00:27:04.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.961 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:04.961 
13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:04.961 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:04.962 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:04.962 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:04.962 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:04.962 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:04.962 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:04.962 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:04.962 13:26:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:05.898 Waiting for block devices as requested 00:27:05.898 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:06.155 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:06.155 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:06.155 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:06.414 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:06.414 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:06.414 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:06.414 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:06.674 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:06.674 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:06.933 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:06.933 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:06.933 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:06.933 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:06.933 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:27:07.190 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:07.190 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:07.190 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:07.447 No valid GPT data, bailing 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:07.447 13:26:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:07.447 00:27:07.447 Discovery Log Number of Records 2, Generation counter 2 00:27:07.447 =====Discovery Log Entry 0====== 00:27:07.447 trtype: tcp 00:27:07.447 adrfam: ipv4 00:27:07.447 subtype: current discovery subsystem 
00:27:07.447 treq: not specified, sq flow control disable supported 00:27:07.447 portid: 1 00:27:07.447 trsvcid: 4420 00:27:07.447 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:07.447 traddr: 10.0.0.1 00:27:07.447 eflags: none 00:27:07.447 sectype: none 00:27:07.447 =====Discovery Log Entry 1====== 00:27:07.447 trtype: tcp 00:27:07.447 adrfam: ipv4 00:27:07.447 subtype: nvme subsystem 00:27:07.447 treq: not specified, sq flow control disable supported 00:27:07.447 portid: 1 00:27:07.447 trsvcid: 4420 00:27:07.447 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:07.447 traddr: 10.0.0.1 00:27:07.447 eflags: none 00:27:07.447 sectype: none 00:27:07.447 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:07.447 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:07.707 ===================================================== 00:27:07.707 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:07.707 ===================================================== 00:27:07.707 Controller Capabilities/Features 00:27:07.707 ================================ 00:27:07.707 Vendor ID: 0000 00:27:07.707 Subsystem Vendor ID: 0000 00:27:07.707 Serial Number: 79d2e9975b7672f1fb5e 00:27:07.707 Model Number: Linux 00:27:07.707 Firmware Version: 6.8.9-20 00:27:07.707 Recommended Arb Burst: 0 00:27:07.707 IEEE OUI Identifier: 00 00 00 00:27:07.707 Multi-path I/O 00:27:07.707 May have multiple subsystem ports: No 00:27:07.707 May have multiple controllers: No 00:27:07.707 Associated with SR-IOV VF: No 00:27:07.707 Max Data Transfer Size: Unlimited 00:27:07.707 Max Number of Namespaces: 0 00:27:07.707 Max Number of I/O Queues: 1024 00:27:07.707 NVMe Specification Version (VS): 1.3 00:27:07.707 NVMe Specification Version (Identify): 1.3 00:27:07.707 Maximum Queue Entries: 1024 
00:27:07.707 Contiguous Queues Required: No 00:27:07.707 Arbitration Mechanisms Supported 00:27:07.707 Weighted Round Robin: Not Supported 00:27:07.707 Vendor Specific: Not Supported 00:27:07.707 Reset Timeout: 7500 ms 00:27:07.707 Doorbell Stride: 4 bytes 00:27:07.707 NVM Subsystem Reset: Not Supported 00:27:07.707 Command Sets Supported 00:27:07.707 NVM Command Set: Supported 00:27:07.707 Boot Partition: Not Supported 00:27:07.707 Memory Page Size Minimum: 4096 bytes 00:27:07.707 Memory Page Size Maximum: 4096 bytes 00:27:07.707 Persistent Memory Region: Not Supported 00:27:07.707 Optional Asynchronous Events Supported 00:27:07.707 Namespace Attribute Notices: Not Supported 00:27:07.707 Firmware Activation Notices: Not Supported 00:27:07.707 ANA Change Notices: Not Supported 00:27:07.707 PLE Aggregate Log Change Notices: Not Supported 00:27:07.707 LBA Status Info Alert Notices: Not Supported 00:27:07.707 EGE Aggregate Log Change Notices: Not Supported 00:27:07.707 Normal NVM Subsystem Shutdown event: Not Supported 00:27:07.707 Zone Descriptor Change Notices: Not Supported 00:27:07.707 Discovery Log Change Notices: Supported 00:27:07.707 Controller Attributes 00:27:07.707 128-bit Host Identifier: Not Supported 00:27:07.707 Non-Operational Permissive Mode: Not Supported 00:27:07.707 NVM Sets: Not Supported 00:27:07.707 Read Recovery Levels: Not Supported 00:27:07.707 Endurance Groups: Not Supported 00:27:07.707 Predictable Latency Mode: Not Supported 00:27:07.707 Traffic Based Keep ALive: Not Supported 00:27:07.707 Namespace Granularity: Not Supported 00:27:07.707 SQ Associations: Not Supported 00:27:07.707 UUID List: Not Supported 00:27:07.707 Multi-Domain Subsystem: Not Supported 00:27:07.707 Fixed Capacity Management: Not Supported 00:27:07.707 Variable Capacity Management: Not Supported 00:27:07.707 Delete Endurance Group: Not Supported 00:27:07.707 Delete NVM Set: Not Supported 00:27:07.707 Extended LBA Formats Supported: Not Supported 00:27:07.707 Flexible 
Data Placement Supported: Not Supported 00:27:07.707 00:27:07.707 Controller Memory Buffer Support 00:27:07.707 ================================ 00:27:07.707 Supported: No 00:27:07.707 00:27:07.707 Persistent Memory Region Support 00:27:07.707 ================================ 00:27:07.707 Supported: No 00:27:07.707 00:27:07.707 Admin Command Set Attributes 00:27:07.707 ============================ 00:27:07.707 Security Send/Receive: Not Supported 00:27:07.707 Format NVM: Not Supported 00:27:07.707 Firmware Activate/Download: Not Supported 00:27:07.707 Namespace Management: Not Supported 00:27:07.707 Device Self-Test: Not Supported 00:27:07.707 Directives: Not Supported 00:27:07.707 NVMe-MI: Not Supported 00:27:07.707 Virtualization Management: Not Supported 00:27:07.707 Doorbell Buffer Config: Not Supported 00:27:07.707 Get LBA Status Capability: Not Supported 00:27:07.707 Command & Feature Lockdown Capability: Not Supported 00:27:07.707 Abort Command Limit: 1 00:27:07.707 Async Event Request Limit: 1 00:27:07.707 Number of Firmware Slots: N/A 00:27:07.707 Firmware Slot 1 Read-Only: N/A 00:27:07.707 Firmware Activation Without Reset: N/A 00:27:07.707 Multiple Update Detection Support: N/A 00:27:07.707 Firmware Update Granularity: No Information Provided 00:27:07.707 Per-Namespace SMART Log: No 00:27:07.707 Asymmetric Namespace Access Log Page: Not Supported 00:27:07.707 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:07.707 Command Effects Log Page: Not Supported 00:27:07.707 Get Log Page Extended Data: Supported 00:27:07.707 Telemetry Log Pages: Not Supported 00:27:07.707 Persistent Event Log Pages: Not Supported 00:27:07.707 Supported Log Pages Log Page: May Support 00:27:07.707 Commands Supported & Effects Log Page: Not Supported 00:27:07.707 Feature Identifiers & Effects Log Page:May Support 00:27:07.707 NVMe-MI Commands & Effects Log Page: May Support 00:27:07.707 Data Area 4 for Telemetry Log: Not Supported 00:27:07.707 Error Log Page Entries 
Supported: 1 00:27:07.707 Keep Alive: Not Supported 00:27:07.707 00:27:07.707 NVM Command Set Attributes 00:27:07.707 ========================== 00:27:07.707 Submission Queue Entry Size 00:27:07.707 Max: 1 00:27:07.707 Min: 1 00:27:07.707 Completion Queue Entry Size 00:27:07.707 Max: 1 00:27:07.707 Min: 1 00:27:07.707 Number of Namespaces: 0 00:27:07.707 Compare Command: Not Supported 00:27:07.707 Write Uncorrectable Command: Not Supported 00:27:07.707 Dataset Management Command: Not Supported 00:27:07.707 Write Zeroes Command: Not Supported 00:27:07.707 Set Features Save Field: Not Supported 00:27:07.707 Reservations: Not Supported 00:27:07.707 Timestamp: Not Supported 00:27:07.707 Copy: Not Supported 00:27:07.707 Volatile Write Cache: Not Present 00:27:07.707 Atomic Write Unit (Normal): 1 00:27:07.707 Atomic Write Unit (PFail): 1 00:27:07.707 Atomic Compare & Write Unit: 1 00:27:07.707 Fused Compare & Write: Not Supported 00:27:07.707 Scatter-Gather List 00:27:07.707 SGL Command Set: Supported 00:27:07.707 SGL Keyed: Not Supported 00:27:07.707 SGL Bit Bucket Descriptor: Not Supported 00:27:07.707 SGL Metadata Pointer: Not Supported 00:27:07.707 Oversized SGL: Not Supported 00:27:07.707 SGL Metadata Address: Not Supported 00:27:07.707 SGL Offset: Supported 00:27:07.707 Transport SGL Data Block: Not Supported 00:27:07.707 Replay Protected Memory Block: Not Supported 00:27:07.707 00:27:07.707 Firmware Slot Information 00:27:07.707 ========================= 00:27:07.707 Active slot: 0 00:27:07.707 00:27:07.707 00:27:07.707 Error Log 00:27:07.707 ========= 00:27:07.707 00:27:07.707 Active Namespaces 00:27:07.707 ================= 00:27:07.707 Discovery Log Page 00:27:07.707 ================== 00:27:07.707 Generation Counter: 2 00:27:07.707 Number of Records: 2 00:27:07.707 Record Format: 0 00:27:07.707 00:27:07.707 Discovery Log Entry 0 00:27:07.707 ---------------------- 00:27:07.707 Transport Type: 3 (TCP) 00:27:07.707 Address Family: 1 (IPv4) 00:27:07.707 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:27:07.707 Entry Flags: 00:27:07.707 Duplicate Returned Information: 0 00:27:07.707 Explicit Persistent Connection Support for Discovery: 0 00:27:07.707 Transport Requirements: 00:27:07.707 Secure Channel: Not Specified 00:27:07.707 Port ID: 1 (0x0001) 00:27:07.707 Controller ID: 65535 (0xffff) 00:27:07.707 Admin Max SQ Size: 32 00:27:07.707 Transport Service Identifier: 4420 00:27:07.708 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:07.708 Transport Address: 10.0.0.1 00:27:07.708 Discovery Log Entry 1 00:27:07.708 ---------------------- 00:27:07.708 Transport Type: 3 (TCP) 00:27:07.708 Address Family: 1 (IPv4) 00:27:07.708 Subsystem Type: 2 (NVM Subsystem) 00:27:07.708 Entry Flags: 00:27:07.708 Duplicate Returned Information: 0 00:27:07.708 Explicit Persistent Connection Support for Discovery: 0 00:27:07.708 Transport Requirements: 00:27:07.708 Secure Channel: Not Specified 00:27:07.708 Port ID: 1 (0x0001) 00:27:07.708 Controller ID: 65535 (0xffff) 00:27:07.708 Admin Max SQ Size: 32 00:27:07.708 Transport Service Identifier: 4420 00:27:07.708 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:07.708 Transport Address: 10.0.0.1 00:27:07.708 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:07.708 get_feature(0x01) failed 00:27:07.708 get_feature(0x02) failed 00:27:07.708 get_feature(0x04) failed 00:27:07.708 ===================================================== 00:27:07.708 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:07.708 ===================================================== 00:27:07.708 Controller Capabilities/Features 00:27:07.708 ================================ 00:27:07.708 Vendor ID: 0000 00:27:07.708 Subsystem Vendor ID: 
0000 00:27:07.708 Serial Number: 2b1b8b60f8888c3988fc 00:27:07.708 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:07.708 Firmware Version: 6.8.9-20 00:27:07.708 Recommended Arb Burst: 6 00:27:07.708 IEEE OUI Identifier: 00 00 00 00:27:07.708 Multi-path I/O 00:27:07.708 May have multiple subsystem ports: Yes 00:27:07.708 May have multiple controllers: Yes 00:27:07.708 Associated with SR-IOV VF: No 00:27:07.708 Max Data Transfer Size: Unlimited 00:27:07.708 Max Number of Namespaces: 1024 00:27:07.708 Max Number of I/O Queues: 128 00:27:07.708 NVMe Specification Version (VS): 1.3 00:27:07.708 NVMe Specification Version (Identify): 1.3 00:27:07.708 Maximum Queue Entries: 1024 00:27:07.708 Contiguous Queues Required: No 00:27:07.708 Arbitration Mechanisms Supported 00:27:07.708 Weighted Round Robin: Not Supported 00:27:07.708 Vendor Specific: Not Supported 00:27:07.708 Reset Timeout: 7500 ms 00:27:07.708 Doorbell Stride: 4 bytes 00:27:07.708 NVM Subsystem Reset: Not Supported 00:27:07.708 Command Sets Supported 00:27:07.708 NVM Command Set: Supported 00:27:07.708 Boot Partition: Not Supported 00:27:07.708 Memory Page Size Minimum: 4096 bytes 00:27:07.708 Memory Page Size Maximum: 4096 bytes 00:27:07.708 Persistent Memory Region: Not Supported 00:27:07.708 Optional Asynchronous Events Supported 00:27:07.708 Namespace Attribute Notices: Supported 00:27:07.708 Firmware Activation Notices: Not Supported 00:27:07.708 ANA Change Notices: Supported 00:27:07.708 PLE Aggregate Log Change Notices: Not Supported 00:27:07.708 LBA Status Info Alert Notices: Not Supported 00:27:07.708 EGE Aggregate Log Change Notices: Not Supported 00:27:07.708 Normal NVM Subsystem Shutdown event: Not Supported 00:27:07.708 Zone Descriptor Change Notices: Not Supported 00:27:07.708 Discovery Log Change Notices: Not Supported 00:27:07.708 Controller Attributes 00:27:07.708 128-bit Host Identifier: Supported 00:27:07.708 Non-Operational Permissive Mode: Not Supported 00:27:07.708 NVM Sets: Not 
Supported 00:27:07.708 Read Recovery Levels: Not Supported 00:27:07.708 Endurance Groups: Not Supported 00:27:07.708 Predictable Latency Mode: Not Supported 00:27:07.708 Traffic Based Keep ALive: Supported 00:27:07.708 Namespace Granularity: Not Supported 00:27:07.708 SQ Associations: Not Supported 00:27:07.708 UUID List: Not Supported 00:27:07.708 Multi-Domain Subsystem: Not Supported 00:27:07.708 Fixed Capacity Management: Not Supported 00:27:07.708 Variable Capacity Management: Not Supported 00:27:07.708 Delete Endurance Group: Not Supported 00:27:07.708 Delete NVM Set: Not Supported 00:27:07.708 Extended LBA Formats Supported: Not Supported 00:27:07.708 Flexible Data Placement Supported: Not Supported 00:27:07.708 00:27:07.708 Controller Memory Buffer Support 00:27:07.708 ================================ 00:27:07.708 Supported: No 00:27:07.708 00:27:07.708 Persistent Memory Region Support 00:27:07.708 ================================ 00:27:07.708 Supported: No 00:27:07.708 00:27:07.708 Admin Command Set Attributes 00:27:07.708 ============================ 00:27:07.708 Security Send/Receive: Not Supported 00:27:07.708 Format NVM: Not Supported 00:27:07.708 Firmware Activate/Download: Not Supported 00:27:07.708 Namespace Management: Not Supported 00:27:07.708 Device Self-Test: Not Supported 00:27:07.708 Directives: Not Supported 00:27:07.708 NVMe-MI: Not Supported 00:27:07.708 Virtualization Management: Not Supported 00:27:07.708 Doorbell Buffer Config: Not Supported 00:27:07.708 Get LBA Status Capability: Not Supported 00:27:07.708 Command & Feature Lockdown Capability: Not Supported 00:27:07.708 Abort Command Limit: 4 00:27:07.708 Async Event Request Limit: 4 00:27:07.708 Number of Firmware Slots: N/A 00:27:07.708 Firmware Slot 1 Read-Only: N/A 00:27:07.708 Firmware Activation Without Reset: N/A 00:27:07.708 Multiple Update Detection Support: N/A 00:27:07.708 Firmware Update Granularity: No Information Provided 00:27:07.708 Per-Namespace SMART Log: Yes 
00:27:07.708 Asymmetric Namespace Access Log Page: Supported 00:27:07.708 ANA Transition Time : 10 sec 00:27:07.708 00:27:07.708 Asymmetric Namespace Access Capabilities 00:27:07.708 ANA Optimized State : Supported 00:27:07.708 ANA Non-Optimized State : Supported 00:27:07.708 ANA Inaccessible State : Supported 00:27:07.708 ANA Persistent Loss State : Supported 00:27:07.708 ANA Change State : Supported 00:27:07.708 ANAGRPID is not changed : No 00:27:07.708 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:07.708 00:27:07.708 ANA Group Identifier Maximum : 128 00:27:07.708 Number of ANA Group Identifiers : 128 00:27:07.708 Max Number of Allowed Namespaces : 1024 00:27:07.708 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:07.708 Command Effects Log Page: Supported 00:27:07.708 Get Log Page Extended Data: Supported 00:27:07.708 Telemetry Log Pages: Not Supported 00:27:07.708 Persistent Event Log Pages: Not Supported 00:27:07.708 Supported Log Pages Log Page: May Support 00:27:07.708 Commands Supported & Effects Log Page: Not Supported 00:27:07.708 Feature Identifiers & Effects Log Page:May Support 00:27:07.708 NVMe-MI Commands & Effects Log Page: May Support 00:27:07.708 Data Area 4 for Telemetry Log: Not Supported 00:27:07.708 Error Log Page Entries Supported: 128 00:27:07.708 Keep Alive: Supported 00:27:07.708 Keep Alive Granularity: 1000 ms 00:27:07.708 00:27:07.708 NVM Command Set Attributes 00:27:07.708 ========================== 00:27:07.708 Submission Queue Entry Size 00:27:07.708 Max: 64 00:27:07.708 Min: 64 00:27:07.708 Completion Queue Entry Size 00:27:07.708 Max: 16 00:27:07.708 Min: 16 00:27:07.708 Number of Namespaces: 1024 00:27:07.708 Compare Command: Not Supported 00:27:07.708 Write Uncorrectable Command: Not Supported 00:27:07.708 Dataset Management Command: Supported 00:27:07.708 Write Zeroes Command: Supported 00:27:07.708 Set Features Save Field: Not Supported 00:27:07.708 Reservations: Not Supported 00:27:07.708 Timestamp: Not Supported 
00:27:07.708 Copy: Not Supported 00:27:07.708 Volatile Write Cache: Present 00:27:07.708 Atomic Write Unit (Normal): 1 00:27:07.708 Atomic Write Unit (PFail): 1 00:27:07.708 Atomic Compare & Write Unit: 1 00:27:07.708 Fused Compare & Write: Not Supported 00:27:07.708 Scatter-Gather List 00:27:07.708 SGL Command Set: Supported 00:27:07.708 SGL Keyed: Not Supported 00:27:07.708 SGL Bit Bucket Descriptor: Not Supported 00:27:07.708 SGL Metadata Pointer: Not Supported 00:27:07.708 Oversized SGL: Not Supported 00:27:07.708 SGL Metadata Address: Not Supported 00:27:07.708 SGL Offset: Supported 00:27:07.708 Transport SGL Data Block: Not Supported 00:27:07.708 Replay Protected Memory Block: Not Supported 00:27:07.708 00:27:07.708 Firmware Slot Information 00:27:07.708 ========================= 00:27:07.708 Active slot: 0 00:27:07.708 00:27:07.708 Asymmetric Namespace Access 00:27:07.708 =========================== 00:27:07.708 Change Count : 0 00:27:07.708 Number of ANA Group Descriptors : 1 00:27:07.708 ANA Group Descriptor : 0 00:27:07.708 ANA Group ID : 1 00:27:07.708 Number of NSID Values : 1 00:27:07.708 Change Count : 0 00:27:07.708 ANA State : 1 00:27:07.708 Namespace Identifier : 1 00:27:07.708 00:27:07.708 Commands Supported and Effects 00:27:07.708 ============================== 00:27:07.708 Admin Commands 00:27:07.708 -------------- 00:27:07.708 Get Log Page (02h): Supported 00:27:07.708 Identify (06h): Supported 00:27:07.709 Abort (08h): Supported 00:27:07.709 Set Features (09h): Supported 00:27:07.709 Get Features (0Ah): Supported 00:27:07.709 Asynchronous Event Request (0Ch): Supported 00:27:07.709 Keep Alive (18h): Supported 00:27:07.709 I/O Commands 00:27:07.709 ------------ 00:27:07.709 Flush (00h): Supported 00:27:07.709 Write (01h): Supported LBA-Change 00:27:07.709 Read (02h): Supported 00:27:07.709 Write Zeroes (08h): Supported LBA-Change 00:27:07.709 Dataset Management (09h): Supported 00:27:07.709 00:27:07.709 Error Log 00:27:07.709 ========= 
00:27:07.709 Entry: 0 00:27:07.709 Error Count: 0x3 00:27:07.709 Submission Queue Id: 0x0 00:27:07.709 Command Id: 0x5 00:27:07.709 Phase Bit: 0 00:27:07.709 Status Code: 0x2 00:27:07.709 Status Code Type: 0x0 00:27:07.709 Do Not Retry: 1 00:27:07.709 Error Location: 0x28 00:27:07.709 LBA: 0x0 00:27:07.709 Namespace: 0x0 00:27:07.709 Vendor Log Page: 0x0 00:27:07.709 ----------- 00:27:07.709 Entry: 1 00:27:07.709 Error Count: 0x2 00:27:07.709 Submission Queue Id: 0x0 00:27:07.709 Command Id: 0x5 00:27:07.709 Phase Bit: 0 00:27:07.709 Status Code: 0x2 00:27:07.709 Status Code Type: 0x0 00:27:07.709 Do Not Retry: 1 00:27:07.709 Error Location: 0x28 00:27:07.709 LBA: 0x0 00:27:07.709 Namespace: 0x0 00:27:07.709 Vendor Log Page: 0x0 00:27:07.709 ----------- 00:27:07.709 Entry: 2 00:27:07.709 Error Count: 0x1 00:27:07.709 Submission Queue Id: 0x0 00:27:07.709 Command Id: 0x4 00:27:07.709 Phase Bit: 0 00:27:07.709 Status Code: 0x2 00:27:07.709 Status Code Type: 0x0 00:27:07.709 Do Not Retry: 1 00:27:07.709 Error Location: 0x28 00:27:07.709 LBA: 0x0 00:27:07.709 Namespace: 0x0 00:27:07.709 Vendor Log Page: 0x0 00:27:07.709 00:27:07.709 Number of Queues 00:27:07.709 ================ 00:27:07.709 Number of I/O Submission Queues: 128 00:27:07.709 Number of I/O Completion Queues: 128 00:27:07.709 00:27:07.709 ZNS Specific Controller Data 00:27:07.709 ============================ 00:27:07.709 Zone Append Size Limit: 0 00:27:07.709 00:27:07.709 00:27:07.709 Active Namespaces 00:27:07.709 ================= 00:27:07.709 get_feature(0x05) failed 00:27:07.709 Namespace ID:1 00:27:07.709 Command Set Identifier: NVM (00h) 00:27:07.709 Deallocate: Supported 00:27:07.709 Deallocated/Unwritten Error: Not Supported 00:27:07.709 Deallocated Read Value: Unknown 00:27:07.709 Deallocate in Write Zeroes: Not Supported 00:27:07.709 Deallocated Guard Field: 0xFFFF 00:27:07.709 Flush: Supported 00:27:07.709 Reservation: Not Supported 00:27:07.709 Namespace Sharing Capabilities: Multiple 
Controllers 00:27:07.709 Size (in LBAs): 1953525168 (931GiB) 00:27:07.709 Capacity (in LBAs): 1953525168 (931GiB) 00:27:07.709 Utilization (in LBAs): 1953525168 (931GiB) 00:27:07.709 UUID: 28c7510d-1460-458b-8e0e-e692e02fb558 00:27:07.709 Thin Provisioning: Not Supported 00:27:07.709 Per-NS Atomic Units: Yes 00:27:07.709 Atomic Boundary Size (Normal): 0 00:27:07.709 Atomic Boundary Size (PFail): 0 00:27:07.709 Atomic Boundary Offset: 0 00:27:07.709 NGUID/EUI64 Never Reused: No 00:27:07.709 ANA group ID: 1 00:27:07.709 Namespace Write Protected: No 00:27:07.709 Number of LBA Formats: 1 00:27:07.709 Current LBA Format: LBA Format #00 00:27:07.709 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:07.709 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:07.709 rmmod nvme_tcp 00:27:07.709 rmmod nvme_fabrics 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.709 13:26:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.240 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.240 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:10.240 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:10.240 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:10.240 13:26:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.241 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.241 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.241 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.241 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.241 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:10.241 13:26:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:11.178 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:11.178 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:11.178 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:11.178 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:11.178 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:11.178 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:11.178 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:11.178 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:11.178 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
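The `clean_kernel_target` trace above tears down the kernel nvmet target through configfs: unlink the subsystem from the port, remove the namespace directory, remove the port, remove the subsystem, then unload the modules. A minimal dry-run sketch of those steps follows; `clean_kernel_target_sketch` is a hypothetical name (not SPDK's implementation), the paths are taken from the trace, and the target file of the traced `echo 0` is not captured in the log, so that step is omitted here.

```shell
# Hedged sketch, not SPDK code: replays the configfs teardown steps traced
# in the log above. Defaults to dry-run (prints each command instead of
# running it); the real operations need root plus nvmet/nvmet_tcp loaded.
clean_kernel_target_sketch() {
  nqn="nqn.2016-06.io.spdk:testnqn"
  subsys="/sys/kernel/config/nvmet/subsystems/$nqn"
  run="echo"                       # dry-run prefix; set DRY_RUN=0 to execute
  [ "${DRY_RUN:-1}" = "0" ] && run=""
  # (the trace also shows an 'echo 0' whose target file is not captured)
  $run rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"  # unlink subsystem from port
  $run rmdir "$subsys/namespaces/1"                              # drop namespace 1
  $run rmdir "/sys/kernel/config/nvmet/ports/1"                  # drop the TCP port
  $run rmdir "$subsys"                                           # drop the subsystem itself
  $run modprobe -r nvmet_tcp nvmet                               # unload transport + core
}
clean_kernel_target_sketch
```

The order matters: configfs refuses to `rmdir` a subsystem that is still linked under a port, which is why the `rm -f` of the port symlink comes first in the traced sequence.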
00:27:12.115 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:12.374 00:27:12.374 real 0m9.958s 00:27:12.374 user 0m2.158s 00:27:12.374 sys 0m3.813s 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.374 ************************************ 00:27:12.374 END TEST nvmf_identify_kernel_target 00:27:12.374 ************************************ 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.374 ************************************ 00:27:12.374 START TEST nvmf_auth_host 00:27:12.374 ************************************ 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:12.374 * Looking for test storage... 
00:27:12.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:12.374 13:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:12.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.374 --rc genhtml_branch_coverage=1 00:27:12.374 --rc genhtml_function_coverage=1 00:27:12.374 --rc genhtml_legend=1 00:27:12.374 --rc geninfo_all_blocks=1 00:27:12.374 --rc geninfo_unexecuted_blocks=1 00:27:12.374 00:27:12.374 ' 00:27:12.374 13:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:12.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.374 --rc genhtml_branch_coverage=1 00:27:12.374 --rc genhtml_function_coverage=1 00:27:12.374 --rc genhtml_legend=1 00:27:12.374 --rc geninfo_all_blocks=1 00:27:12.374 --rc geninfo_unexecuted_blocks=1 00:27:12.374 00:27:12.374 ' 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:12.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.374 --rc genhtml_branch_coverage=1 00:27:12.374 --rc genhtml_function_coverage=1 00:27:12.374 --rc genhtml_legend=1 00:27:12.374 --rc geninfo_all_blocks=1 00:27:12.374 --rc geninfo_unexecuted_blocks=1 00:27:12.374 00:27:12.374 ' 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:12.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:12.374 --rc genhtml_branch_coverage=1 00:27:12.374 --rc genhtml_function_coverage=1 00:27:12.374 --rc genhtml_legend=1 00:27:12.374 --rc geninfo_all_blocks=1 00:27:12.374 --rc geninfo_unexecuted_blocks=1 00:27:12.374 00:27:12.374 ' 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.374 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.375 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:12.375 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.375 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.375 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.375 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.635 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.636 13:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:12.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:12.636 13:26:10 
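[Editor's note] The `common.sh: line 33: [: : integer expression expected` message in the trace above is the classic failure mode of `test`/`[` receiving an empty string where it expects a number: `[ '' -eq 1 ]` is an error, not "false". A minimal sketch of the usual guard (expanding a default with `${var:-0}`); the variable name and messages here are illustrative, not SPDK's actual fix:

```shell
#!/usr/bin/env bash
set -euo pipefail

var=""                       # e.g. an unset NO_HUGE-style flag

# [ "$var" -eq 1 ] would print "[: : integer expression expected";
# defaulting the expansion keeps the comparison numeric.
if [ "${var:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"        # taken here, since var is empty
fi
```

The same trace survives this error only because the script is not running under `set -e` at that point; the failed `[` simply evaluates as false and execution continues.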
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:12.636 13:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.542 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.542 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:14.542 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:14.542 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:14.542 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:14.542 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:14.542 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:14.543 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:14.543 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:14.543 Found net devices under 0000:09:00.0: cvl_0_0 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:14.543 Found net devices under 0000:09:00.1: cvl_0_1 00:27:14.543 13:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:14.543 13:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.543 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.802 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:14.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:27:14.802 00:27:14.802 --- 10.0.0.2 ping statistics --- 00:27:14.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.803 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:27:14.803 00:27:14.803 --- 10.0.0.1 ping statistics --- 00:27:14.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.803 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3266865 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:14.803 13:26:12 
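[Editor's note] The `nvmf_tcp_init` steps traced above move the target NIC (`cvl_0_0`) into its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the real link, then verify both directions with `ping`. A dry-run sketch of that plumbing, with interface names and addresses taken from the trace; `run` only echoes the commands (swap it for `"$@"` as root on a machine that actually has these interfaces):

```shell
#!/usr/bin/env bash
set -euo pipefail

run() { echo "+ $*"; }   # dry-run; replace body with "$@" to apply for real

setup_tcp_netns() {
    local target_if=$1 initiator_if=$2 ns=$3
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"              # target side lives in the netns
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                # initiator -> target
    run ip netns exec "$ns" ping -c 1 10.0.0.1            # target -> initiator
}

setup_tcp_netns cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Note how the trace then prefixes every target-side command (including `nvmf_tgt` itself) with `ip netns exec cvl_0_0_ns_spdk` via `NVMF_TARGET_NS_CMD`, which is why the ping from inside the namespace proves the topology the target will actually use.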
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3266865 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3266865 ']' 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.803 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d77dba745e94aa7210ad2f30efc40efe 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SOS 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d77dba745e94aa7210ad2f30efc40efe 0 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d77dba745e94aa7210ad2f30efc40efe 0 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d77dba745e94aa7210ad2f30efc40efe 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SOS 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SOS 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.SOS 00:27:15.061 13:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2660656b1db24e8a4489ea1927bc27daad208b8d853c688c6edd2b63f3dd538b 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KZf 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2660656b1db24e8a4489ea1927bc27daad208b8d853c688c6edd2b63f3dd538b 3 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2660656b1db24e8a4489ea1927bc27daad208b8d853c688c6edd2b63f3dd538b 3 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2660656b1db24e8a4489ea1927bc27daad208b8d853c688c6edd2b63f3dd538b 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:15.061 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KZf 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KZf 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.KZf 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d4e847714ca9016e38f6e6818e0a4fab81ec1509128a107c 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cfJ 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d4e847714ca9016e38f6e6818e0a4fab81ec1509128a107c 0 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d4e847714ca9016e38f6e6818e0a4fab81ec1509128a107c 0 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.319 13:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d4e847714ca9016e38f6e6818e0a4fab81ec1509128a107c 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cfJ 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cfJ 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.cfJ 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c3fd83df4f24edfd388bdde690c415501577c4644aba628a 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.puX 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c3fd83df4f24edfd388bdde690c415501577c4644aba628a 2 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 c3fd83df4f24edfd388bdde690c415501577c4644aba628a 2 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.319 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c3fd83df4f24edfd388bdde690c415501577c4644aba628a 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.puX 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.puX 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.puX 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0d9669178f7f7c23156751187b9f2c20 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xhm 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0d9669178f7f7c23156751187b9f2c20 1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0d9669178f7f7c23156751187b9f2c20 1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0d9669178f7f7c23156751187b9f2c20 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xhm 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xhm 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xhm 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=d1f2f3683742d795800679e08ab2a813 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2A0 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d1f2f3683742d795800679e08ab2a813 1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d1f2f3683742d795800679e08ab2a813 1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d1f2f3683742d795800679e08ab2a813 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:15.320 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2A0 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2A0 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2A0 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:15.578 13:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50561351db32a2689bc9026b762b287286dc27d9c1b61b27 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Cpt 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50561351db32a2689bc9026b762b287286dc27d9c1b61b27 2 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50561351db32a2689bc9026b762b287286dc27d9c1b61b27 2 00:27:15.578 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.579 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.579 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50561351db32a2689bc9026b762b287286dc27d9c1b61b27 00:27:15.579 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:15.579 13:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Cpt 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Cpt 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Cpt 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3639370de76c033e00d2e115f05756bb 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Vdz 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3639370de76c033e00d2e115f05756bb 0 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3639370de76c033e00d2e115f05756bb 0 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3639370de76c033e00d2e115f05756bb 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Vdz 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Vdz 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Vdz 00:27:15.579 13:26:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eb3e73edc550eba97b3ba86dfae760ac5ba459287adf3cc2d9bcf92956854833 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.T1v 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eb3e73edc550eba97b3ba86dfae760ac5ba459287adf3cc2d9bcf92956854833 3 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eb3e73edc550eba97b3ba86dfae760ac5ba459287adf3cc2d9bcf92956854833 3 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eb3e73edc550eba97b3ba86dfae760ac5ba459287adf3cc2d9bcf92956854833 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.T1v 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.T1v 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.T1v 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3266865 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3266865 ']' 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
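The key-generation loop traced above (gen_dhchap_key → format_dhchap_key → `python -`) can be sketched as follows. This is an approximation of what nvmf/common.sh pipes the hex key through, under the assumption that the DH-HMAC-CHAP secret is `DHHC-1:<digest>:` + base64(key || crc32(key), CRC appended little-endian) + `:`; the heredoc body itself is not shown in the trace.

```shell
# Sketch of the format_dhchap_key step: wrap a raw hex key in the NVMe
# DH-HMAC-CHAP "DHHC-1" secret format (assumed layout, see lead-in above).
format_dhchap_key() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 appended little-endian
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

# The 32-hex-digit key the trace generated with `xxd -p -c0 -l 16 /dev/urandom`
format_dhchap_key d1f2f3683742d795800679e08ab2a813 1
```

The two-digit digest field matches the `DHHC-1:00:`, `DHHC-1:02:`, `DHHC-1:03:` prefixes visible later in the log (null, sha384, sha512 respectively).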
00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.579 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SOS 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.KZf ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KZf 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cfJ 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.puX ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.puX 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xhm 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2A0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2A0 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.Cpt 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Vdz ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Vdz 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.T1v 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.838 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.839 13:26:13 
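The host/auth.sh@80 loop above registers each generated key file (and its controller counterpart, when one was generated) with the SPDK keyring. A dry-run sketch of that loop, with a hypothetical `rpc_cmd` stand-in that only echoes what the real helper would send to scripts/rpc.py over /var/tmp/spdk.sock:

```shell
# Dry-run sketch of the keyring-loading loop; rpc_cmd is a stand-in here.
rpc_cmd() { echo "rpc.py $*"; }

keys=(/tmp/spdk.key-null.SOS /tmp/spdk.key-null.cfJ /tmp/spdk.key-sha256.xhm)
ckeys=(/tmp/spdk.key-sha512.KZf /tmp/spdk.key-sha384.puX /tmp/spdk.key-sha256.2A0)

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]:-} ]]; then   # ckey may be empty, as for keys[4] in the log
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```

The named keyring entries (`key0`/`ckey0`, …) are what the later `bdev_nvme_attach_controller --dhchap-key keyN --dhchap-ctrlr-key ckeyN` calls refer to.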
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:15.839 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:16.097 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:16.097 13:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:17.032 Waiting for block devices as requested 00:27:17.032 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:17.032 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:17.289 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:17.289 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:17.290 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:17.547 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:17.548 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:17.548 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:17.548 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:17.805 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:17.805 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:17.805 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:18.063 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:18.063 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:18.063 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:18.063 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:18.063 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:18.629 No valid GPT data, bailing 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:18.629 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:18.630 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:18.888 00:27:18.888 Discovery Log Number of Records 2, Generation counter 2 00:27:18.888 =====Discovery Log Entry 0====== 00:27:18.888 trtype: tcp 00:27:18.888 adrfam: ipv4 00:27:18.888 subtype: current discovery subsystem 00:27:18.888 treq: not specified, sq flow control disable supported 00:27:18.888 portid: 1 00:27:18.888 trsvcid: 4420 00:27:18.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:18.888 traddr: 10.0.0.1 00:27:18.888 eflags: none 00:27:18.888 sectype: none 00:27:18.888 =====Discovery Log Entry 1====== 00:27:18.888 trtype: tcp 00:27:18.888 adrfam: ipv4 00:27:18.888 subtype: nvme subsystem 00:27:18.888 treq: not specified, sq flow control disable supported 00:27:18.888 portid: 1 00:27:18.888 trsvcid: 4420 00:27:18.888 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:18.888 traddr: 10.0.0.1 00:27:18.888 eflags: none 00:27:18.888 sectype: none 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
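The nvmet_auth_set_key trace above shows only the values being echoed (host/auth.sh@48–51), not their destinations. A plausible reconstruction of the configfs writes it performs is below; this is privileged, requires a kernel nvmet target built with auth support, and the attribute paths are an assumption on my part rather than something the log confirms:

```shell
# Reconstruction (assumed paths) of nvmet_auth_set_key sha256 ffdhe2048 1:
# point the kernel target's per-host auth attributes at the chosen hash,
# DH group, host secret, and controller secret.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$host/dhchap_key"       # host secret from the log, truncated here
echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"  # controller secret, truncated here
```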
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.888 nvme0n1 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:18.888 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.147 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.148 nvme0n1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.148 13:26:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.148 
13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.148 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.407 nvme0n1 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.407 13:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.407 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:19.666 nvme0n1 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.666 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.667 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 nvme0n1 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.925 13:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.925 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.184 nvme0n1 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.184 
13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:20.184 
13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.184 13:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.184 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.442 nvme0n1 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.442 13:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.442 13:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.442 13:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.701 nvme0n1 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.701 13:26:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.701 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.960 nvme0n1 00:27:20.960 13:26:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:20.960 13:26:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.960 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.219 nvme0n1 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.219 13:26:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.219 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.478 nvme0n1 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.478 13:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.737 nvme0n1 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.737 
13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.737 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.995 nvme0n1 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.995 13:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.995 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.253 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.512 nvme0n1 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.512 13:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:22.512 
13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.512 13:26:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.512 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.513 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.513 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.513 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.513 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.513 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.513 13:26:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 nvme0n1 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.771 13:26:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.771 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.772 
13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.772 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.030 nvme0n1 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.030 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.031 13:26:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.031 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.289 13:26:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.548 nvme0n1 00:27:23.548 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.548 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.548 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.548 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.548 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.548 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.806 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.807 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.807 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.807 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.807 13:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.807 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.373 nvme0n1 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.373 13:26:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.373 13:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.939 nvme0n1 00:27:24.939 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.939 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.939 13:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.940 13:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.940 13:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.940 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.505 nvme0n1 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.506 13:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.506 13:26:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.506 13:26:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.785 nvme0n1 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.785 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.043 13:26:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.043 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.044 13:26:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.053 nvme0n1 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.053 13:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.053 13:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.053 13:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.053 13:26:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.620 nvme0n1 00:27:27.620 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.620 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.620 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.620 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.620 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.620 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.878 13:26:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.878 13:26:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.811 nvme0n1 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.811 13:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.811 13:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.811 13:26:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.811 13:26:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.744 nvme0n1 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.744 13:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:29.744 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.745 13:26:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.745 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.311 nvme0n1 00:27:30.311 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.311 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.311 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.311 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.311 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.569 13:26:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.569 13:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.569 nvme0n1 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.569 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.828 13:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.828 13:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.828 13:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.828 nvme0n1 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.828 13:26:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.828 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.829 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.829 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.829 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.829 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.829 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.088 nvme0n1 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.088 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.346 nvme0n1 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.346 
13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.346 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.347 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.347 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.347 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.347 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.347 13:26:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.604 nvme0n1 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:31.604 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.605 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.862 nvme0n1 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.862 
13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:31.862 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.121 nvme0n1
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/:
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb:
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/:
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb:
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.121 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.380 nvme0n1
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==:
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ:
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==:
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ:
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.380 13:26:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.639 nvme0n1
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=:
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=:
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:32.639 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.640 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.899 nvme0n1
00:27:32.899 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.899 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1:
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=:
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1:
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=:
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:32.900 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.159 nvme0n1
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==:
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==:
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==:
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==:
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.159 13:26:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.418 nvme0n1
00:27:33.418 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.418 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.418 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.418 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/:
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb:
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/:
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb:
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.678 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.936 nvme0n1
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==:
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ:
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==:
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]]
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ:
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:33.937 13:26:31
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.937 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.195 nvme0n1 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.195 13:26:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.195 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.195 13:26:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.196 
13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.196 13:26:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.454 nvme0n1 00:27:34.454 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.454 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.454 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.454 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.454 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.454 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.712 13:26:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.712 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.970 nvme0n1 
00:27:34.970 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.970 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.970 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.970 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.970 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.970 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:35.228 13:26:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.228 
13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.228 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.229 13:26:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.795 nvme0n1 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.795 13:26:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.795 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.053 nvme0n1 00:27:36.053 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.053 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.053 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.053 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.312 13:26:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.878 nvme0n1 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.878 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.443 nvme0n1 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.443 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.444 13:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.444 13:26:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.444 13:26:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.378 nvme0n1 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:38.378 13:26:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.378 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.379 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.379 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.379 13:26:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.945 nvme0n1 00:27:38.945 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.945 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.945 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.945 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.945 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.945 
13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.203 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.204 13:26:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.204 13:26:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.138 nvme0n1 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.138 13:26:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.138 13:26:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.138 13:26:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.073 nvme0n1 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:41.073 13:26:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.073 13:26:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.008 nvme0n1 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.008 
13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.008 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.009 nvme0n1 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.009 13:26:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.009 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.010 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.010 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.268 nvme0n1 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:42.268 13:26:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.268 13:26:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.526 nvme0n1 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.526 13:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.526 13:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.526 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.784 nvme0n1 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.784 13:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.784 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.043 nvme0n1 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.043 13:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.043 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.044 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.044 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.044 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.044 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.044 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.044 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.044 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.302 nvme0n1 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.302 13:26:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:43.302 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.303 
13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.303 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.561 nvme0n1 00:27:43.561 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.561 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.561 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.561 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.561 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.562 13:26:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 
00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.562 13:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.562 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.820 nvme0n1 00:27:43.820 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.820 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.820 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.820 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.821 13:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.821 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.080 nvme0n1 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.080 13:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.080 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.338 nvme0n1 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:44.338 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.339 13:26:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.339 13:26:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.597 nvme0n1 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.597 13:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:44.597 13:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.597 13:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.597 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.856 nvme0n1 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.856 13:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:44.856 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:45.114 13:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.114 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.373 nvme0n1 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.373 13:26:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.373 13:26:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.631 nvme0n1 00:27:45.631 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.631 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.631 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.632 
13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.632 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.898 nvme0n1 00:27:45.898 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.899 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.900 13:26:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.900 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.901 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.901 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.901 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.901 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.901 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.901 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.471 nvme0n1 00:27:46.471 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.471 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.471 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.471 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.471 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.471 13:26:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:46.471 13:26:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.471 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.038 nvme0n1 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
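The `get_main_ns_ip` calls traced above (nvmf/common.sh@769-783) resolve the transport to an environment-variable *name* via the `ip_candidates` map, then dereference that name with indirect expansion before echoing the address. A minimal re-creation of that pattern — not the SPDK source itself; only the `10.0.0.1` value and the two map entries are taken from the log, the rest is a sketch assuming bash 4+:

```shell
# Sketch of the get_main_ns_ip lookup seen in the xtrace (illustrative only).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP    # common.sh@772
        [tcp]=NVMF_INITIATOR_IP        # common.sh@773
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                   # common.sh@775 guard
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # common.sh@775 guard
    ip=${ip_candidates[$TEST_TRANSPORT]}  # a variable NAME, e.g. NVMF_INITIATOR_IP
    ip=${!ip}                             # indirect expansion to its value
    [[ -z $ip ]] && return 1              # common.sh@778 guard
    echo "$ip"                            # common.sh@783
}

# With the values visible in this run:
TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip
```

This is why the trace first prints `ip=NVMF_INITIATOR_IP` (the name) and only then `echo 10.0.0.1` (the dereferenced value).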
00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.038 
13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.038 13:26:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.604 nvme0n1 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.605 13:26:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.605 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:48.170 nvme0n1 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.170 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.171 
13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.171 13:26:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.738 nvme0n1 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDc3ZGJhNzQ1ZTk0YWE3MjEwYWQyZjMwZWZjNDBlZmV79/t1: 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjY2MDY1NmIxZGIyNGU4YTQ0ODllYTE5MjdiYzI3ZGFhZDIwOGI4ZDg1M2M2ODhjNmVkZDJiNjNmM2RkNTM4Ys72y6E=: 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.738 13:26:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.738 13:26:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.673 nvme0n1 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.673 13:26:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.673 13:26:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.673 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.618 nvme0n1 00:27:50.618 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.619 13:26:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.619 13:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.619 13:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.619 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.251 nvme0n1 00:27:51.251 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.251 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.251 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.251 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.251 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.251 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.509 13:26:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTA1NjEzNTFkYjMyYTI2ODliYzkwMjZiNzYyYjI4NzI4NmRjMjdkOWMxYjYxYjI3oXk8qg==: 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzYzOTM3MGRlNzZjMDMzZTAwZDJlMTE1ZjA1NzU2YmI4VQJZ: 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.509 13:26:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:52.444 nvme0n1 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.444 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWIzZTczZWRjNTUwZWJhOTdiM2JhODZkZmFlNzYwYWM1YmE0NTkyODdhZGYzY2MyZDliY2Y5Mjk1Njg1NDgzM8I9MoM=: 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.445 
13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.445 13:26:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.381 nvme0n1 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:53.381 
13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.381 request: 00:27:53.381 { 00:27:53.381 "name": "nvme0", 00:27:53.381 "trtype": "tcp", 00:27:53.381 "traddr": "10.0.0.1", 00:27:53.381 "adrfam": "ipv4", 00:27:53.381 "trsvcid": "4420", 00:27:53.381 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:53.381 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:53.381 "prchk_reftag": false, 00:27:53.381 "prchk_guard": false, 00:27:53.381 "hdgst": false, 00:27:53.381 "ddgst": false, 00:27:53.381 "allow_unrecognized_csi": false, 00:27:53.381 "method": "bdev_nvme_attach_controller", 00:27:53.381 "req_id": 1 00:27:53.381 } 00:27:53.381 Got JSON-RPC error response 00:27:53.381 response: 00:27:53.381 { 00:27:53.381 "code": -5, 00:27:53.381 "message": "Input/output 
error" 00:27:53.381 } 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.381 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.382 13:26:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.382 request: 00:27:53.382 { 00:27:53.382 "name": "nvme0", 00:27:53.382 "trtype": "tcp", 00:27:53.382 "traddr": "10.0.0.1", 
00:27:53.382 "adrfam": "ipv4", 00:27:53.382 "trsvcid": "4420", 00:27:53.382 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:53.382 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:53.382 "prchk_reftag": false, 00:27:53.382 "prchk_guard": false, 00:27:53.382 "hdgst": false, 00:27:53.382 "ddgst": false, 00:27:53.382 "dhchap_key": "key2", 00:27:53.382 "allow_unrecognized_csi": false, 00:27:53.382 "method": "bdev_nvme_attach_controller", 00:27:53.382 "req_id": 1 00:27:53.382 } 00:27:53.382 Got JSON-RPC error response 00:27:53.382 response: 00:27:53.382 { 00:27:53.382 "code": -5, 00:27:53.382 "message": "Input/output error" 00:27:53.382 } 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.382 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.640 13:26:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.640 13:26:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.640 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.640 request: 00:27:53.640 { 00:27:53.640 "name": "nvme0", 00:27:53.640 "trtype": "tcp", 00:27:53.640 "traddr": "10.0.0.1", 00:27:53.640 "adrfam": "ipv4", 00:27:53.640 "trsvcid": "4420", 00:27:53.640 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:53.640 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:53.640 "prchk_reftag": false, 00:27:53.640 "prchk_guard": false, 00:27:53.640 "hdgst": false, 00:27:53.640 "ddgst": false, 00:27:53.640 "dhchap_key": "key1", 00:27:53.641 "dhchap_ctrlr_key": "ckey2", 00:27:53.641 "allow_unrecognized_csi": false, 00:27:53.641 "method": "bdev_nvme_attach_controller", 00:27:53.641 "req_id": 1 00:27:53.641 } 00:27:53.641 Got JSON-RPC error response 00:27:53.641 response: 00:27:53.641 { 00:27:53.641 "code": -5, 00:27:53.641 "message": "Input/output error" 00:27:53.641 } 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.641 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.899 nvme0n1 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.899 13:26:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.899 13:26:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.899 request: 00:27:53.899 { 00:27:53.899 "name": "nvme0", 00:27:53.899 "dhchap_key": "key1", 00:27:53.899 "dhchap_ctrlr_key": "ckey2", 00:27:53.899 "method": "bdev_nvme_set_keys", 00:27:53.899 "req_id": 1 00:27:53.899 } 00:27:53.899 Got JSON-RPC error response 00:27:53.899 response: 00:27:53.899 { 00:27:53.899 "code": -13, 00:27:53.899 "message": "Permission denied" 00:27:53.899 } 00:27:53.899 
13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:53.899 13:26:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:55.272 13:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.272 13:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:55.272 13:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.272 13:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.272 13:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.272 13:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:55.272 13:26:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDRlODQ3NzE0Y2E5MDE2ZTM4ZjZlNjgxOGUwYTRmYWI4MWVjMTUwOTEyOGExMDdjM3F37Q==: 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: ]] 00:27:56.207 13:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzNmZDgzZGY0ZjI0ZWRmZDM4OGJkZGU2OTBjNDE1NTAxNTc3YzQ2NDRhYmE2MjhhcVg4Hg==: 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.207 nvme0n1 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.207 13:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:56.207 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQ5NjY5MTc4ZjdmN2MyMzE1Njc1MTE4N2I5ZjJjMjCW4XT/: 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: ]] 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDFmMmYzNjgzNzQyZDc5NTgwMDY3OWUwOGFiMmE4MTPPwPMb: 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:56.208 
13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.208 request: 00:27:56.208 { 00:27:56.208 "name": "nvme0", 00:27:56.208 "dhchap_key": "key2", 00:27:56.208 "dhchap_ctrlr_key": "ckey1", 00:27:56.208 "method": "bdev_nvme_set_keys", 00:27:56.208 "req_id": 1 00:27:56.208 } 00:27:56.208 Got JSON-RPC error response 00:27:56.208 response: 00:27:56.208 { 00:27:56.208 "code": -13, 00:27:56.208 "message": "Permission denied" 00:27:56.208 } 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.208 13:26:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.208 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.466 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:56.466 13:26:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.400 rmmod nvme_tcp 00:27:57.400 rmmod nvme_fabrics 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3266865 ']' 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3266865 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3266865 ']' 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3266865 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266865 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266865' 00:27:57.400 killing process with pid 3266865 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3266865 00:27:57.400 13:26:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3266865 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.659 13:26:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:00.197 13:26:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:01.132 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:01.132 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:01.132 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:01.132 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:01.132 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:01.132 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:01.132 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:01.132 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:01.132 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:01.132 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:01.132 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:01.132 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:01.132 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:01.132 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:01.132 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:01.133 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:02.080 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:02.080 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.SOS /tmp/spdk.key-null.cfJ /tmp/spdk.key-sha256.xhm /tmp/spdk.key-sha384.Cpt /tmp/spdk.key-sha512.T1v 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:02.080 13:26:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:03.458 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:03.458 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:03.458 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:03.458 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:03.458 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:03.458 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:03.458 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:03.458 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:03.458 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:03.458 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:03.458 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:03.458 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:03.458 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:03.458 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:03.458 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:03.458 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:03.458 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:03.458 00:28:03.458 real 0m51.140s 00:28:03.458 user 0m48.862s 00:28:03.458 sys 0m6.112s 00:28:03.458 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.458 13:27:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.458 ************************************ 00:28:03.458 END TEST nvmf_auth_host 00:28:03.458 ************************************ 00:28:03.458 13:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:28:03.458 13:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:03.458 13:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.458 13:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.458 13:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.458 ************************************ 00:28:03.458 START TEST nvmf_digest 00:28:03.458 ************************************ 00:28:03.458 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:03.717 * Looking for test storage... 00:28:03.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.717 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:03.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.718 --rc genhtml_branch_coverage=1 00:28:03.718 --rc genhtml_function_coverage=1 00:28:03.718 --rc genhtml_legend=1 00:28:03.718 --rc geninfo_all_blocks=1 00:28:03.718 --rc geninfo_unexecuted_blocks=1 00:28:03.718 00:28:03.718 ' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:03.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.718 --rc genhtml_branch_coverage=1 00:28:03.718 --rc genhtml_function_coverage=1 00:28:03.718 --rc genhtml_legend=1 00:28:03.718 --rc geninfo_all_blocks=1 00:28:03.718 --rc geninfo_unexecuted_blocks=1 00:28:03.718 00:28:03.718 ' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:03.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.718 --rc genhtml_branch_coverage=1 00:28:03.718 --rc genhtml_function_coverage=1 00:28:03.718 --rc genhtml_legend=1 00:28:03.718 --rc geninfo_all_blocks=1 00:28:03.718 --rc geninfo_unexecuted_blocks=1 00:28:03.718 00:28:03.718 ' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:03.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.718 --rc genhtml_branch_coverage=1 00:28:03.718 --rc genhtml_function_coverage=1 00:28:03.718 --rc genhtml_legend=1 00:28:03.718 --rc geninfo_all_blocks=1 00:28:03.718 --rc geninfo_unexecuted_blocks=1 00:28:03.718 00:28:03.718 ' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.718 13:27:01 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.718 13:27:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.624 13:27:03 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.624 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:05.625 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:05.625 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:05.625 Found net devices under 0000:09:00.0: cvl_0_0 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:05.625 Found net devices under 0000:09:00.1: cvl_0_1 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:05.625 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:05.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:28:05.884 00:28:05.884 --- 10.0.0.2 ping statistics --- 00:28:05.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.884 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:28:05.884 00:28:05.884 --- 10.0.0.1 ping statistics --- 00:28:05.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.884 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.884 ************************************ 00:28:05.884 START TEST nvmf_digest_clean 00:28:05.884 ************************************ 00:28:05.884 
13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3276583 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3276583 00:28:05.884 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3276583 ']' 00:28:05.885 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.885 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.885 13:27:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.885 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.885 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.885 [2024-11-25 13:27:03.403844] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:05.885 [2024-11-25 13:27:03.403916] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.885 [2024-11-25 13:27:03.474763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.885 [2024-11-25 13:27:03.530950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.885 [2024-11-25 13:27:03.530998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.885 [2024-11-25 13:27:03.531011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.885 [2024-11-25 13:27:03.531022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.885 [2024-11-25 13:27:03.531031] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:05.885 [2024-11-25 13:27:03.531596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.143 null0 00:28:06.143 [2024-11-25 13:27:03.760025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.143 [2024-11-25 13:27:03.784225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:06.143 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3276608 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3276608 /var/tmp/bperf.sock 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3276608 ']' 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.144 13:27:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.401 [2024-11-25 13:27:03.832915] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:06.401 [2024-11-25 13:27:03.832976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276608 ] 00:28:06.401 [2024-11-25 13:27:03.897873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.401 [2024-11-25 13:27:03.954595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.659 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.659 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:06.659 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.659 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.659 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.917 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.917 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.175 nvme0n1 00:28:07.175 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:07.175 13:27:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.434 Running I/O for 2 seconds... 00:28:09.305 18390.00 IOPS, 71.84 MiB/s [2024-11-25T12:27:07.223Z] 18455.50 IOPS, 72.09 MiB/s 00:28:09.564 Latency(us) 00:28:09.564 [2024-11-25T12:27:07.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.564 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:09.564 nvme0n1 : 2.04 18118.71 70.78 0.00 0.00 6923.56 3325.35 47185.92 00:28:09.564 [2024-11-25T12:27:07.223Z] =================================================================================================================== 00:28:09.564 [2024-11-25T12:27:07.223Z] Total : 18118.71 70.78 0.00 0.00 6923.56 3325.35 47185.92 00:28:09.564 { 00:28:09.564 "results": [ 00:28:09.564 { 00:28:09.564 "job": "nvme0n1", 00:28:09.564 "core_mask": "0x2", 00:28:09.564 "workload": "randread", 00:28:09.564 "status": "finished", 00:28:09.564 "queue_depth": 128, 00:28:09.564 "io_size": 4096, 00:28:09.564 "runtime": 2.04424, 00:28:09.564 "iops": 18118.714045317574, 00:28:09.564 "mibps": 70.77622673952177, 00:28:09.564 "io_failed": 0, 00:28:09.564 "io_timeout": 0, 00:28:09.564 "avg_latency_us": 6923.564862902267, 00:28:09.564 "min_latency_us": 3325.345185185185, 00:28:09.564 "max_latency_us": 47185.92 00:28:09.564 } 00:28:09.564 ], 00:28:09.564 "core_count": 1 00:28:09.564 } 00:28:09.564 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:09.564 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:28:09.564 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:09.564 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:09.564 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:09.564 | select(.opcode=="crc32c") 00:28:09.564 | "\(.module_name) \(.executed)"' 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3276608 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3276608 ']' 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3276608 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276608 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276608' 00:28:09.824 killing process with pid 3276608 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3276608 00:28:09.824 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.824 00:28:09.824 Latency(us) 00:28:09.824 [2024-11-25T12:27:07.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.824 [2024-11-25T12:27:07.483Z] =================================================================================================================== 00:28:09.824 [2024-11-25T12:27:07.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.824 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3276608 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3277251 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3277251 /var/tmp/bperf.sock 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3277251 ']' 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.082 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:10.082 [2024-11-25 13:27:07.600945] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:10.082 [2024-11-25 13:27:07.601047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277251 ] 00:28:10.082 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:10.082 Zero copy mechanism will not be used. 
00:28:10.082 [2024-11-25 13:27:07.669000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.082 [2024-11-25 13:27:07.730252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.340 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.340 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:10.340 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.340 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.340 13:27:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:10.599 13:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.599 13:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:11.166 nvme0n1 00:28:11.166 13:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:11.166 13:27:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:11.166 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:11.166 Zero copy mechanism will not be used. 00:28:11.166 Running I/O for 2 seconds... 
00:28:13.047 5912.00 IOPS, 739.00 MiB/s [2024-11-25T12:27:10.706Z] 6100.00 IOPS, 762.50 MiB/s 00:28:13.047 Latency(us) 00:28:13.047 [2024-11-25T12:27:10.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.047 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:13.047 nvme0n1 : 2.00 6100.42 762.55 0.00 0.00 2618.61 719.08 7670.14 00:28:13.047 [2024-11-25T12:27:10.706Z] =================================================================================================================== 00:28:13.047 [2024-11-25T12:27:10.706Z] Total : 6100.42 762.55 0.00 0.00 2618.61 719.08 7670.14 00:28:13.047 { 00:28:13.047 "results": [ 00:28:13.047 { 00:28:13.047 "job": "nvme0n1", 00:28:13.047 "core_mask": "0x2", 00:28:13.047 "workload": "randread", 00:28:13.047 "status": "finished", 00:28:13.047 "queue_depth": 16, 00:28:13.047 "io_size": 131072, 00:28:13.047 "runtime": 2.002484, 00:28:13.047 "iops": 6100.423274293327, 00:28:13.047 "mibps": 762.5529092866659, 00:28:13.047 "io_failed": 0, 00:28:13.047 "io_timeout": 0, 00:28:13.047 "avg_latency_us": 2618.6072288922846, 00:28:13.047 "min_latency_us": 719.0755555555555, 00:28:13.047 "max_latency_us": 7670.139259259259 00:28:13.047 } 00:28:13.047 ], 00:28:13.047 "core_count": 1 00:28:13.047 } 00:28:13.047 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:13.047 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:13.047 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:13.047 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:13.047 | select(.opcode=="crc32c") 00:28:13.047 | "\(.module_name) \(.executed)"' 00:28:13.047 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3277251 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3277251 ']' 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3277251 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.613 13:27:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277251 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277251' 00:28:13.613 killing process with pid 3277251 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3277251 00:28:13.613 Received shutdown signal, test time was about 2.000000 seconds 
00:28:13.613 00:28:13.613 Latency(us) 00:28:13.613 [2024-11-25T12:27:11.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.613 [2024-11-25T12:27:11.272Z] =================================================================================================================== 00:28:13.613 [2024-11-25T12:27:11.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3277251 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3278049 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3278049 /var/tmp/bperf.sock 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3278049 ']' 00:28:13.613 13:27:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.613 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.613 [2024-11-25 13:27:11.249263] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:13.613 [2024-11-25 13:27:11.249394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278049 ] 00:28:13.871 [2024-11-25 13:27:11.317710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.871 [2024-11-25 13:27:11.376477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.871 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.871 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:13.871 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:13.871 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:13.871 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:14.438 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.438 13:27:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.695 nvme0n1 00:28:14.695 13:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.695 13:27:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.695 Running I/O for 2 seconds... 
00:28:17.004 19987.00 IOPS, 78.07 MiB/s [2024-11-25T12:27:14.663Z] 19301.50 IOPS, 75.40 MiB/s 00:28:17.004 Latency(us) 00:28:17.004 [2024-11-25T12:27:14.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.004 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:17.004 nvme0n1 : 2.01 19298.08 75.38 0.00 0.00 6617.74 2645.71 11845.03 00:28:17.004 [2024-11-25T12:27:14.663Z] =================================================================================================================== 00:28:17.004 [2024-11-25T12:27:14.663Z] Total : 19298.08 75.38 0.00 0.00 6617.74 2645.71 11845.03 00:28:17.004 { 00:28:17.004 "results": [ 00:28:17.004 { 00:28:17.004 "job": "nvme0n1", 00:28:17.004 "core_mask": "0x2", 00:28:17.004 "workload": "randwrite", 00:28:17.004 "status": "finished", 00:28:17.004 "queue_depth": 128, 00:28:17.004 "io_size": 4096, 00:28:17.004 "runtime": 2.006573, 00:28:17.004 "iops": 19298.076870365545, 00:28:17.004 "mibps": 75.38311277486541, 00:28:17.004 "io_failed": 0, 00:28:17.004 "io_timeout": 0, 00:28:17.004 "avg_latency_us": 6617.742189166933, 00:28:17.004 "min_latency_us": 2645.7125925925925, 00:28:17.004 "max_latency_us": 11845.025185185184 00:28:17.004 } 00:28:17.004 ], 00:28:17.004 "core_count": 1 00:28:17.004 } 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:28:17.004 | select(.opcode=="crc32c") 00:28:17.004 | "\(.module_name) \(.executed)"' 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3278049 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3278049 ']' 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3278049 00:28:17.004 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:17.005 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.005 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3278049 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3278049' 00:28:17.263 killing process with pid 3278049 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3278049 00:28:17.263 Received shutdown signal, test time was about 2.000000 seconds 00:28:17.263 
00:28:17.263 Latency(us) 00:28:17.263 [2024-11-25T12:27:14.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.263 [2024-11-25T12:27:14.922Z] =================================================================================================================== 00:28:17.263 [2024-11-25T12:27:14.922Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3278049 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3278460 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3278460 /var/tmp/bperf.sock 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3278460 ']' 00:28:17.263 13:27:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.263 13:27:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:17.521 [2024-11-25 13:27:14.925515] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:17.521 [2024-11-25 13:27:14.925598] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278460 ] 00:28:17.522 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.522 Zero copy mechanism will not be used. 
00:28:17.522 [2024-11-25 13:27:14.990704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.522 [2024-11-25 13:27:15.047927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.522 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.522 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:17.522 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.522 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.522 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:18.090 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.090 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.383 nvme0n1 00:28:18.383 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:18.383 13:27:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.667 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.667 Zero copy mechanism will not be used. 00:28:18.667 Running I/O for 2 seconds... 
00:28:20.542 5672.00 IOPS, 709.00 MiB/s [2024-11-25T12:27:18.201Z] 5763.00 IOPS, 720.38 MiB/s 00:28:20.542 Latency(us) 00:28:20.542 [2024-11-25T12:27:18.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.542 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:20.542 nvme0n1 : 2.00 5761.75 720.22 0.00 0.00 2770.02 1723.35 5048.70 00:28:20.542 [2024-11-25T12:27:18.201Z] =================================================================================================================== 00:28:20.542 [2024-11-25T12:27:18.201Z] Total : 5761.75 720.22 0.00 0.00 2770.02 1723.35 5048.70 00:28:20.542 { 00:28:20.542 "results": [ 00:28:20.542 { 00:28:20.542 "job": "nvme0n1", 00:28:20.542 "core_mask": "0x2", 00:28:20.542 "workload": "randwrite", 00:28:20.543 "status": "finished", 00:28:20.543 "queue_depth": 16, 00:28:20.543 "io_size": 131072, 00:28:20.543 "runtime": 2.003905, 00:28:20.543 "iops": 5761.750182768145, 00:28:20.543 "mibps": 720.2187728460182, 00:28:20.543 "io_failed": 0, 00:28:20.543 "io_timeout": 0, 00:28:20.543 "avg_latency_us": 2770.0166050131197, 00:28:20.543 "min_latency_us": 1723.354074074074, 00:28:20.543 "max_latency_us": 5048.69925925926 00:28:20.543 } 00:28:20.543 ], 00:28:20.543 "core_count": 1 00:28:20.543 } 00:28:20.543 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:20.543 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:20.543 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:20.543 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:20.543 | select(.opcode=="crc32c") 00:28:20.543 | "\(.module_name) \(.executed)"' 00:28:20.543 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3278460 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3278460 ']' 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3278460 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3278460 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3278460' 00:28:20.800 killing process with pid 3278460 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3278460 00:28:20.800 Received shutdown signal, test time was about 2.000000 seconds 
00:28:20.800 00:28:20.800 Latency(us) 00:28:20.800 [2024-11-25T12:27:18.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.800 [2024-11-25T12:27:18.459Z] =================================================================================================================== 00:28:20.800 [2024-11-25T12:27:18.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:20.800 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3278460 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3276583 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3276583 ']' 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3276583 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276583 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276583' 00:28:21.057 killing process with pid 3276583 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3276583 00:28:21.057 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3276583 00:28:21.315 00:28:21.315 
real 0m15.480s 00:28:21.315 user 0m31.143s 00:28:21.315 sys 0m4.235s 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:21.315 ************************************ 00:28:21.315 END TEST nvmf_digest_clean 00:28:21.315 ************************************ 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.315 ************************************ 00:28:21.315 START TEST nvmf_digest_error 00:28:21.315 ************************************ 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3278959 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:21.315 
13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3278959 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3278959 ']' 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.315 13:27:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.315 [2024-11-25 13:27:18.936740] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:21.315 [2024-11-25 13:27:18.936834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.599 [2024-11-25 13:27:19.012647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.599 [2024-11-25 13:27:19.071722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.599 [2024-11-25 13:27:19.071775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:21.599 [2024-11-25 13:27:19.071804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.599 [2024-11-25 13:27:19.071816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.599 [2024-11-25 13:27:19.071826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.599 [2024-11-25 13:27:19.072472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.599 [2024-11-25 13:27:19.197179] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.599 13:27:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.599 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.856 null0 00:28:21.856 [2024-11-25 13:27:19.315716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.856 [2024-11-25 13:27:19.339887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3279042 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3279042 /var/tmp/bperf.sock 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3279042 ']' 
00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:21.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.856 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.856 [2024-11-25 13:27:19.386008] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:21.856 [2024-11-25 13:27:19.386081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279042 ] 00:28:21.856 [2024-11-25 13:27:19.450160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.856 [2024-11-25 13:27:19.506996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.114 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.114 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:22.114 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.114 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.370 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:22.370 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.370 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.370 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.370 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.370 13:27:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.935 nvme0n1 00:28:22.935 13:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:22.935 13:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.935 13:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.935 13:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.935 13:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:22.935 13:27:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.935 Running I/O for 2 seconds... 00:28:22.935 [2024-11-25 13:27:20.590407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:22.935 [2024-11-25 13:27:20.590451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.935 [2024-11-25 13:27:20.590471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.601208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.601237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.601269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.617257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.617290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.617318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.632181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.632213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22091 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.632245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.643515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.643543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.643584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.659454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.659485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.659502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.675448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.675479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.675511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.691235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.691263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.691294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.704246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.704276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.704293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.717013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.717043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.717076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.728807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.728835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.728866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.743383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 
00:28:23.192 [2024-11-25 13:27:20.743412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.192 [2024-11-25 13:27:20.743429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.192 [2024-11-25 13:27:20.756854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.192 [2024-11-25 13:27:20.756883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.193 [2024-11-25 13:27:20.756913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.193 [2024-11-25 13:27:20.772572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.193 [2024-11-25 13:27:20.772604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.193 [2024-11-25 13:27:20.772637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.193 [2024-11-25 13:27:20.786029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.193 [2024-11-25 13:27:20.786060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.193 [2024-11-25 13:27:20.786077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.193 [2024-11-25 13:27:20.797620] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.193 [2024-11-25 13:27:20.797649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.193 [2024-11-25 13:27:20.797664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.193 [2024-11-25 13:27:20.809989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.193 [2024-11-25 13:27:20.810016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.193 [2024-11-25 13:27:20.810046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.193 [2024-11-25 13:27:20.826790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.193 [2024-11-25 13:27:20.826820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.193 [2024-11-25 13:27:20.826837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.193 [2024-11-25 13:27:20.840365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.193 [2024-11-25 13:27:20.840411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.193 [2024-11-25 13:27:20.840429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.856159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.856194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.856213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.867444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.867474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.867506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.881606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.881635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.881656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.894255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.894299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.894324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.908773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.908802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.908818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.921907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.921937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.921969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.936262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.936293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.936319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.948081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.948112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 
13:27:20.948142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.959291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.959340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.959357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.974984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.975012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.975044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:20.987367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:20.987396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:20.987427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.001400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.001434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18186 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.001465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.014212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.014239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.014270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.026962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.026990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.027021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.039314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.039345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.039363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.051877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.051908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.051926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.064196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.064225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.064256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.078852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.078881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.078911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.092361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.092396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.092415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.449 [2024-11-25 13:27:21.105855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2181c10) 00:28:23.449 [2024-11-25 13:27:21.105887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.449 [2024-11-25 13:27:21.105904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.119335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.119367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.119385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.131050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.131079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.131110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.144406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.144435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.144466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.158445] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.158477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.158510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.171744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.171772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.171803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.186262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.186292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.186335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.199856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.199884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.199914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.213022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.213051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.213081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.225542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.225573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.225609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.237331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.237362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.237380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.252635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.252666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.252683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.265127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.265170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.265186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.279744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.279775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.279793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.291163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.291195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.291212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.304786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.304818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 
13:27:21.304835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.315813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.315856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.315871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.330321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.330353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.330370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.344910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.344938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.344970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.705 [2024-11-25 13:27:21.356356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.705 [2024-11-25 13:27:21.356387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9926 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.705 [2024-11-25 13:27:21.356405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.372555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.372583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.372618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.388517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.388547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.388579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.403560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.403590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.403626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.416828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.416857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.416889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.431559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.431590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.431608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.442186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.442219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.442237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.458200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.458228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.458264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.473204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 
00:28:23.964 [2024-11-25 13:27:21.473233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.473264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.489270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.489311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.489331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.501383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.501414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.501447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.516115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.516146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.516177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.531762] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.531793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.531810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.543065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.543093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.543123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 [2024-11-25 13:27:21.559341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.559387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.559403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.964 18568.00 IOPS, 72.53 MiB/s [2024-11-25T12:27:21.623Z] [2024-11-25 13:27:21.572211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10) 00:28:23.964 [2024-11-25 13:27:21.572240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.964 [2024-11-25 13:27:21.572256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.964 [2024-11-25 13:27:21.587052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10)
00:28:23.964 [2024-11-25 13:27:21.587090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.964 [2024-11-25 13:27:21.587108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" record triple repeats for further READ commands on qid:1 (13:27:21.601 through 13:27:22.559) ...]
00:28:24.999 [2024-11-25 13:27:22.574678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2181c10)
00:28:24.999 [2024-11-25 13:27:22.574707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.999 [2024-11-25 13:27:22.574739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.999 18199.00 IOPS, 71.09 MiB/s
00:28:24.999 Latency(us)
00:28:24.999 [2024-11-25T12:27:22.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.999 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:24.999 nvme0n1 : 2.00 18198.52 71.09 0.00 0.00 7022.89 3592.34 23398.78
00:28:24.999 [2024-11-25T12:27:22.658Z] ===================================================================================================================
00:28:24.999 [2024-11-25T12:27:22.658Z] Total : 18198.52 71.09 0.00 0.00 7022.89 3592.34 23398.78
00:28:24.999 {
00:28:24.999   "results": [
00:28:24.999     {
00:28:24.999       "job": "nvme0n1",
00:28:24.999       "core_mask": "0x2",
00:28:24.999       "workload": "randread",
00:28:24.999       "status": "finished",
00:28:24.999       "queue_depth": 128,
00:28:24.999       "io_size": 4096,
00:28:24.999       "runtime": 2.004778,
00:28:24.999       "iops": 18198.523726816635,
00:28:24.999       "mibps": 71.08798330787748,
00:28:24.999       "io_failed": 0,
00:28:24.999       "io_timeout": 0,
00:28:24.999       "avg_latency_us": 7022.887199340554,
00:28:24.999       "min_latency_us": 3592.343703703704,
00:28:24.999       "max_latency_us": 23398.77925925926
00:28:24.999     }
00:28:24.999   ],
00:28:24.999   "core_count": 1
00:28:24.999 }
00:28:24.999 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:24.999 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:24.999 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:24.999 13:27:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:24.999 | .driver_specific 00:28:24.999 | .nvme_error 00:28:24.999 | .status_code 00:28:24.999 | .command_transient_transport_error' 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3279042 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3279042 ']' 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3279042 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3279042 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3279042' 00:28:25.257 killing process with pid 3279042 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3279042 00:28:25.257 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.257 00:28:25.257 Latency(us) 00:28:25.257 [2024-11-25T12:27:22.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.257 [2024-11-25T12:27:22.916Z] 
=================================================================================================================== 00:28:25.257 [2024-11-25T12:27:22.916Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.257 13:27:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3279042 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3279452 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3279452 /var/tmp/bperf.sock 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3279452 ']' 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:25.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:25.515 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:25.774 [2024-11-25 13:27:23.182333] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:25.774 [2024-11-25 13:27:23.182417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279452 ] 00:28:25.774 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:25.774 Zero copy mechanism will not be used. 00:28:25.774 [2024-11-25 13:27:23.247368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.774 [2024-11-25 13:27:23.302441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.774 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.774 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:25.774 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.774 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.032 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:26.032 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.032 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.290 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.290 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.290 13:27:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.549 nvme0n1 00:28:26.807 13:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:26.807 13:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.807 13:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.807 13:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.807 13:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.807 13:27:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.807 Zero copy mechanism will not be used. 00:28:26.807 Running I/O for 2 seconds... 
00:28:26.807 [2024-11-25 13:27:24.325835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.807 [2024-11-25 13:27:24.325879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.807 [2024-11-25 13:27:24.325914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.807 [2024-11-25 13:27:24.331250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.807 [2024-11-25 13:27:24.331299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.807 [2024-11-25 13:27:24.331323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.807 [2024-11-25 13:27:24.337883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.807 [2024-11-25 13:27:24.337913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.807 [2024-11-25 13:27:24.337943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.345670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.345700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.345716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.351411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.351441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.351472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.357652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.357681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.357712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.363644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.363675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.363716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.369772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.369803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.369836] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.375553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.375584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.375602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.382146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.382191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.382210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.388359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.388391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.388409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.393105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.393136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10720 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.393153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.397601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.397631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.397663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.402203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.402232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.402265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.406750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.406794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.406811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.412063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.412098] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.412133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.418940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.418970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.418986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.426572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.426603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.426621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.434828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.434857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.434889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.442581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 
13:27:24.442610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.442626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.450233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.450267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.450310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:26.808 [2024-11-25 13:27:24.457856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:26.808 [2024-11-25 13:27:24.457885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.808 [2024-11-25 13:27:24.457916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.465586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.465634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.465652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.473115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.473163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.473186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.480805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.480850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.480866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.488519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.488550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.488585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.496226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.496273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.496298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.504034] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.504064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.504096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.511760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.511792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.511810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.519549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.519581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.519599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.527433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.527464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.527482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e 
p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.534454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.534502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.534519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.540335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.540388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.540406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.545754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.545785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.545816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.551087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.551117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.551150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.556370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.556401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.556418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.561960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.561992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.562009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.569614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.569665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.569683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.576566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.576598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.576616] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.581010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.581042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.581059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.588351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.588383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.588401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.596204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.596249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.596266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.604160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.068 [2024-11-25 13:27:24.604205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:27.068 [2024-11-25 13:27:24.604221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.068 [2024-11-25 13:27:24.611864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.611894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.611925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.619509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.619540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.619573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.627225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.627271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.627287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.634800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.634847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.634864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.642495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.642541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.642559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.650234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.650267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.650300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.657811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.657842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.657880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.665528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.665572] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.665588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.673001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.673045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.673061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.680049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.680080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.680112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.687706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.687737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.687771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.695275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.695313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.695333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.703267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.703299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.703341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.710489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.710522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.710540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.069 [2024-11-25 13:27:24.718831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.069 [2024-11-25 13:27:24.718877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.069 [2024-11-25 13:27:24.718894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.725856] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.725895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.725914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.730820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.730851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.730882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.738928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.738957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.738992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.744817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.744847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.744878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.749464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.749494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.749511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.752589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.752632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.752648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.756619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.756650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.756667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.762311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.762340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.762372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.768964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.330 [2024-11-25 13:27:24.768995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.330 [2024-11-25 13:27:24.769012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.330 [2024-11-25 13:27:24.774517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.774549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.774566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.779735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.779766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.779783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.782713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.782741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.782773] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.787532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.787560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.787591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.792483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.792513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.792545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.797210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.797238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.797270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.801936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.801964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.801995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.807469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.807500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.807517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.814493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.814524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.814548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.821219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.821248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.821280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.826695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.826739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.826756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.832455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.832486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.832503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.837511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.837542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.837575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.843666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.843711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.843728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.848195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.848239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.848256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.852879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.852909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.852941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.857463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.857494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.857511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.861887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.861916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.861949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.866818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.866847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.866878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.870682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.870710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.870741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.875205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.875248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.875264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.879643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.879687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.879703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.884196] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.884224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.884255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.888596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.888624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.888655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.893198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.893226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.893241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.897694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.331 [2024-11-25 13:27:24.897724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.331 [2024-11-25 13:27:24.897747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:28:27.331 [2024-11-25 13:27:24.902851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.902882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.902900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.908264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.908295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.908322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.913890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.913935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.913952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.918990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.919019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.919051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.924547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.924579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.924596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.930470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.930517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.930535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.936007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.936038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.936072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.940529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.940560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 
13:27:24.940577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.944707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.944744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.944761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.949253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.949284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.949301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.953859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.953889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.953906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.958296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.958333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.958350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.962766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.962796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.962814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.967230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.967258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.967291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.971627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.971656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.971673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.976174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.976203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.976234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.980608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.980638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.980655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.332 [2024-11-25 13:27:24.985364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.332 [2024-11-25 13:27:24.985397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.332 [2024-11-25 13:27:24.985415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:24.990066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:24.990098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:24.990115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:24.994585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 
13:27:24.994617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:24.994636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:24.998851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:24.998883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:24.998900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:25.001900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:25.001930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:25.001963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:25.006605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:25.006650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:25.006668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:25.011734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:25.011764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:25.011796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:25.017951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:25.017982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:25.018016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:25.024363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:25.024395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.591 [2024-11-25 13:27:25.024418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.591 [2024-11-25 13:27:25.031023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.591 [2024-11-25 13:27:25.031051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.031082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.037160] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.037192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.037209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.042493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.042525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.042542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.048676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.048721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.048738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.054007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.054039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.054056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e 
p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.057861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.057890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.057923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.062846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.062878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.062896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.068341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.068370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.068402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.073888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.073926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.073959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.078606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.078636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.078668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.083130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.083161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.083178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.087722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.087751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.087767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.092532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.092561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.092578] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.097201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.097231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.097264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.102558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.102608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.102625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.108472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.108503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.108520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.113694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.113724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.113757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.119893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.119937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.119954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.125620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.125652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.125670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.131577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.131623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.131641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.137479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.137524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.137541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.142581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.142611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.142628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.147794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.147838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.147854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.153644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.153674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.153707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.160343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.160375] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.160392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.168034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.168066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.168089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.174351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.174382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.592 [2024-11-25 13:27:25.174400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.592 [2024-11-25 13:27:25.180011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.592 [2024-11-25 13:27:25.180042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.180059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.184907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.184937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.184954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.189428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.189457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.189474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.194143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.194172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.194190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.198999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.199029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.199046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.203718] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.203749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.203766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.209498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.209529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.209546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.214702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.214732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.214749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.218821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.218850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.218882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.225212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.225241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.225274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.232040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.232084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.232102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.238802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.238833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.238866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.593 [2024-11-25 13:27:25.246701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.593 [2024-11-25 13:27:25.246734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.593 [2024-11-25 13:27:25.246751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.852 [2024-11-25 13:27:25.254631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.852 [2024-11-25 13:27:25.254677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.852 [2024-11-25 13:27:25.254693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.852 [2024-11-25 13:27:25.261130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.852 [2024-11-25 13:27:25.261161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.852 [2024-11-25 13:27:25.261194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.852 [2024-11-25 13:27:25.266069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.852 [2024-11-25 13:27:25.266114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.852 [2024-11-25 13:27:25.266137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.852 [2024-11-25 13:27:25.271015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.852 [2024-11-25 13:27:25.271045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.852 [2024-11-25 
13:27:25.271062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.852 [2024-11-25 13:27:25.276407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.852 [2024-11-25 13:27:25.276438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.852 [2024-11-25 13:27:25.276456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.852 [2024-11-25 13:27:25.281594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.852 [2024-11-25 13:27:25.281624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.852 [2024-11-25 13:27:25.281642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.852 [2024-11-25 13:27:25.286702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.852 [2024-11-25 13:27:25.286732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.852 [2024-11-25 13:27:25.286764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.289762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.289791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.289823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.295399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.295430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.295447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.300167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.300212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.300228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.305394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.305425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.305442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.311104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.311139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.311173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.316417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.316448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.316465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.853 5342.00 IOPS, 667.75 MiB/s [2024-11-25T12:27:25.512Z] [2024-11-25 13:27:25.323057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.323102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.323120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.328218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.328249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.328266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.333821] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.333853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.333870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.341341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.341373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.341390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.347579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.347609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.347626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.353089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.353120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.353137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e 
p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.358323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.358353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.358371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.363537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.363568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.363585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.369986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.370016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.370033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.377310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.377341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.377359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.383686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.383717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.383735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.391954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.392000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.392017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.399763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.399808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.399825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.407413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.407445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.407463] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.415082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.415114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.415131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.422841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.422873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.422897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.429444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.429476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.429493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.434594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.434625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.434642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.439632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.439663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.439680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.444632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.444663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.853 [2024-11-25 13:27:25.444680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.853 [2024-11-25 13:27:25.450251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.853 [2024-11-25 13:27:25.450300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.450348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.457261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.457292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.457317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.464391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.464422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.464439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.471692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.471723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.471740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.478637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.478669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.478687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.484203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.484231] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.484263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.491099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.491131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.491149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.498371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.498403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.498421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:27.854 [2024-11-25 13:27:25.506299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:27.854 [2024-11-25 13:27:25.506354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.854 [2024-11-25 13:27:25.506371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.514579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.514624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.514641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.522767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.522811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.522828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.531045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.531076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.531110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.539298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.539335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.539373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.546888] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.546919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.546937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.555128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.555158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.555190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.563245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.563291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.563315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.571249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.571278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.571319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e 
p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.579210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.579239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.579254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.586624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.586655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.586671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.591798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.591828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.591861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.596316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.596359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.596376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.600846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.600879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.600912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.605564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.605610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.605626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.610377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.610409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.610427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.615218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.615248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.615281] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.620764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.620793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.620824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.626685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.626732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.626750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.631970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.632017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.632034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.637571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.637617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.637635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.643429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.643461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.643478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.648896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.648928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.648945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.654859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.114 [2024-11-25 13:27:25.654891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.114 [2024-11-25 13:27:25.654908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.114 [2024-11-25 13:27:25.660518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.660550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.660567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.665562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.665592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.665624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.670380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.670410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.670426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.675383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.675413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.675446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.680557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.680588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.680605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.685364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.685395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.685413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.689745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.689775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.689798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.694244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.694274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.694291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.699143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.699175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.699192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.704414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.704446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.704463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.711417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.711448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.711466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.716861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.716892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.716909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.722130] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.722160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.722178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.727127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.727158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.727174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.731961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.731992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.732010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.115 [2024-11-25 13:27:25.737290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.115 [2024-11-25 13:27:25.737351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.115 [2024-11-25 13:27:25.737370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e 
p:0 m:0 dnr:0
00:28:28.115 [2024-11-25 13:27:25.742401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.115 [2024-11-25 13:27:25.742433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.115 [2024-11-25 13:27:25.742450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.115 [2024-11-25 13:27:25.745801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.115 [2024-11-25 13:27:25.745832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.115 [2024-11-25 13:27:25.745865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.115 [2024-11-25 13:27:25.751313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.115 [2024-11-25 13:27:25.751344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.115 [2024-11-25 13:27:25.751361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.115 [2024-11-25 13:27:25.757264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.115 [2024-11-25 13:27:25.757317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.115 [2024-11-25 13:27:25.757337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.115 [2024-11-25 13:27:25.763279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.115 [2024-11-25 13:27:25.763333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.115 [2024-11-25 13:27:25.763352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.115 [2024-11-25 13:27:25.769171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.115 [2024-11-25 13:27:25.769204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.115 [2024-11-25 13:27:25.769222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.774913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.774968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.774999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.781095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.781125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.781157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.786574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.786619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.786634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.792177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.792207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.792241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.797191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.797222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.797240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.801747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.801775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.801808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.806441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.806470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.806502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.810879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.810906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.810938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.815451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.815480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.815497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.819927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.819954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.819985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.824613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.824649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.824666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.829184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.829212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.829244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.833551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.833595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.833612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.838022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.838065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.838081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.842657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.842686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.842704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.847204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.847232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.847264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.851791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.375 [2024-11-25 13:27:25.851821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.375 [2024-11-25 13:27:25.851854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.375 [2024-11-25 13:27:25.856804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.856833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.856864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.862946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.862989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.863005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.870687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.870717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.870750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.878065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.878093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.878125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.885656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.885700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.885715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.893358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.893388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.893420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.900964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.900993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.901025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.909440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.909485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.909502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.916718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.916762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.916779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.924870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.924901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.924933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.932318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.932349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.932373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.939828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.939860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.939892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.947513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.947544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.947562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.955368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.955413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.955431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.962709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.962754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.962773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.969847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.969878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.969896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.977116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.977162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.977179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.983858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.983889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.983921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.989165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.989195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.989212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.994333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.994369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.994387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:25.999719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:25.999764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:25.999781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:26.004840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:26.004870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:26.004903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:26.009410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:26.009441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:26.009458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:26.013978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:26.014008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:26.014025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:26.018920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:26.018951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:26.018967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:26.024361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:26.024393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:26.024410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.376 [2024-11-25 13:27:26.028898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.376 [2024-11-25 13:27:26.028930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.376 [2024-11-25 13:27:26.028948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.635 [2024-11-25 13:27:26.032684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.635 [2024-11-25 13:27:26.032717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.635 [2024-11-25 13:27:26.032735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.635 [2024-11-25 13:27:26.040259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.635 [2024-11-25 13:27:26.040313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.635 [2024-11-25 13:27:26.040334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.635 [2024-11-25 13:27:26.046159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.635 [2024-11-25 13:27:26.046204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.635 [2024-11-25 13:27:26.046222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.635 [2024-11-25 13:27:26.052374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.635 [2024-11-25 13:27:26.052406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.635 [2024-11-25 13:27:26.052425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.635 [2024-11-25 13:27:26.058218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.635 [2024-11-25 13:27:26.058248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.635 [2024-11-25 13:27:26.058280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.635 [2024-11-25 13:27:26.064341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.635 [2024-11-25 13:27:26.064372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.064405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.070525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.070557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.070575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.075628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.075659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.075689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.081582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.081637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.081654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.087392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.087424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.087448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.092118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.092150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.092167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.095842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.095872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.095889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.099093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.099123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.099141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.104240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.104270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.104311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.110164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.110194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.110211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.115378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.115424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.115441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.120701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.120730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.120761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.126327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.126374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.126392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.133208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.133256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.133273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.139460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.139506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.139523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.145654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.145699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.145715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.151879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.151911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.151928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.157598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.157644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.157661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.163372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.163402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.163434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.169007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.169038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.169055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.175049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.175095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.175113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.180782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.180829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.180852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.186996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.187042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.187059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.192752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.192783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.192800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.199612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.199658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.199674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.205886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180)
00:28:28.636 [2024-11-25 13:27:26.205917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.636 [2024-11-25 13:27:26.205949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:28:28.636 [2024-11-25 13:27:26.212042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data
digest error on tqpair=(0x1c47180) 00:28:28.636 [2024-11-25 13:27:26.212073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.636 [2024-11-25 13:27:26.212090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.636 [2024-11-25 13:27:26.217558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.636 [2024-11-25 13:27:26.217589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.217607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.222372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.222402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.222419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.227333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.227378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.227396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.232431] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.232468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.232486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.237300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.237354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.237372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.242281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.242319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.242338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.247734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.247765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.247782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.253830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.253862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.253878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.259902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.259934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.259951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.265582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.265614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.265631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.269390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.269422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.269440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.273034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.273062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.273094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.277618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.277645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.277660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.282537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.282568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 13:27:26.282585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.637 [2024-11-25 13:27:26.287594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.637 [2024-11-25 13:27:26.287623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.637 [2024-11-25 
13:27:26.287639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.895 [2024-11-25 13:27:26.293068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.895 [2024-11-25 13:27:26.293102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.895 [2024-11-25 13:27:26.293120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.895 [2024-11-25 13:27:26.299208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.895 [2024-11-25 13:27:26.299238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.895 [2024-11-25 13:27:26.299271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.895 [2024-11-25 13:27:26.304485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.895 [2024-11-25 13:27:26.304517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.895 [2024-11-25 13:27:26.304534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.895 [2024-11-25 13:27:26.309588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.895 [2024-11-25 13:27:26.309618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.895 [2024-11-25 13:27:26.309649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:28.895 [2024-11-25 13:27:26.314753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.895 [2024-11-25 13:27:26.314797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.895 [2024-11-25 13:27:26.314813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:28.895 [2024-11-25 13:27:26.320270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.895 [2024-11-25 13:27:26.320300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.895 [2024-11-25 13:27:26.320349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:28.895 5318.00 IOPS, 664.75 MiB/s [2024-11-25T12:27:26.554Z] [2024-11-25 13:27:26.326458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c47180) 00:28:28.895 [2024-11-25 13:27:26.326488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.895 [2024-11-25 13:27:26.326505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:28.895 00:28:28.895 Latency(us) 00:28:28.895 [2024-11-25T12:27:26.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.895 Job: nvme0n1 (Core Mask 
0x2, workload: randread, depth: 16, IO size: 131072) 00:28:28.895 nvme0n1 : 2.00 5316.99 664.62 0.00 0.00 3004.38 640.19 9757.58 00:28:28.895 [2024-11-25T12:27:26.554Z] =================================================================================================================== 00:28:28.895 [2024-11-25T12:27:26.554Z] Total : 5316.99 664.62 0.00 0.00 3004.38 640.19 9757.58 00:28:28.895 { 00:28:28.895 "results": [ 00:28:28.895 { 00:28:28.895 "job": "nvme0n1", 00:28:28.895 "core_mask": "0x2", 00:28:28.895 "workload": "randread", 00:28:28.895 "status": "finished", 00:28:28.895 "queue_depth": 16, 00:28:28.895 "io_size": 131072, 00:28:28.895 "runtime": 2.003391, 00:28:28.895 "iops": 5316.985051844597, 00:28:28.895 "mibps": 664.6231314805747, 00:28:28.895 "io_failed": 0, 00:28:28.895 "io_timeout": 0, 00:28:28.895 "avg_latency_us": 3004.3752114713284, 00:28:28.895 "min_latency_us": 640.1896296296296, 00:28:28.895 "max_latency_us": 9757.582222222221 00:28:28.895 } 00:28:28.895 ], 00:28:28.895 "core_count": 1 00:28:28.895 } 00:28:28.895 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:28.895 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:28.895 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:28.895 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:28.895 | .driver_specific 00:28:28.895 | .nvme_error 00:28:28.895 | .status_code 00:28:28.895 | .command_transient_transport_error' 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 344 > 0 )) 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3279452 
00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3279452 ']' 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3279452 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3279452 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3279452' 00:28:29.153 killing process with pid 3279452 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3279452 00:28:29.153 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.153 00:28:29.153 Latency(us) 00:28:29.153 [2024-11-25T12:27:26.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.153 [2024-11-25T12:27:26.812Z] =================================================================================================================== 00:28:29.153 [2024-11-25T12:27:26.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.153 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3279452 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:29.411 13:27:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3279940 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3279940 /var/tmp/bperf.sock 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3279940 ']' 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.411 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.412 13:27:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.412 [2024-11-25 13:27:26.917243] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:28:29.412 [2024-11-25 13:27:26.917356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279940 ] 00:28:29.412 [2024-11-25 13:27:26.982394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.412 [2024-11-25 13:27:27.040001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.670 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.670 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:29.670 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:29.670 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:29.927 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:29.927 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.927 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.927 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.927 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.927 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.492 nvme0n1 00:28:30.492 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:30.492 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.492 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.492 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.492 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:30.492 13:27:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.492 Running I/O for 2 seconds... 
00:28:30.492 [2024-11-25 13:27:28.096157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eff18 00:28:30.492 [2024-11-25 13:27:28.097485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.492 [2024-11-25 13:27:28.097528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:30.492 [2024-11-25 13:27:28.108271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e9e10 00:28:30.492 [2024-11-25 13:27:28.109708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.492 [2024-11-25 13:27:28.109738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:30.492 [2024-11-25 13:27:28.119864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166edd58 00:28:30.492 [2024-11-25 13:27:28.121105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.492 [2024-11-25 13:27:28.121151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:30.492 [2024-11-25 13:27:28.131830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e27f0 00:28:30.492 [2024-11-25 13:27:28.133069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.492 [2024-11-25 13:27:28.133103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:30.492 [2024-11-25 13:27:28.145717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ef270 00:28:30.492 [2024-11-25 13:27:28.147722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.492 [2024-11-25 13:27:28.147752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.154389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e4578 00:28:30.751 [2024-11-25 13:27:28.155206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.155242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.166120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e49b0 00:28:30.751 [2024-11-25 13:27:28.167088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.167147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.178063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7970 00:28:30.751 [2024-11-25 13:27:28.179048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.179082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.189958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166df118 00:28:30.751 [2024-11-25 13:27:28.190734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.190765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.203976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eb760 00:28:30.751 [2024-11-25 13:27:28.205687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.205717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.215883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166dfdc0 00:28:30.751 [2024-11-25 13:27:28.217551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.217605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.224450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:30.751 [2024-11-25 13:27:28.225428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.225464] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:30.751 [2024-11-25 13:27:28.236270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7538 00:28:30.751 [2024-11-25 13:27:28.237257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.751 [2024-11-25 13:27:28.237292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.247921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e9e10 00:28:30.752 [2024-11-25 13:27:28.248607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.248638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.259959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e8d30 00:28:30.752 [2024-11-25 13:27:28.260695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:6753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.260744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.271196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166efae0 00:28:30.752 [2024-11-25 13:27:28.272343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21440 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:30.752 [2024-11-25 13:27:28.272373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.282963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e95a0 00:28:30.752 [2024-11-25 13:27:28.283861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.283897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.297930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eb328 00:28:30.752 [2024-11-25 13:27:28.299679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.299721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.309297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f0350 00:28:30.752 [2024-11-25 13:27:28.311075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.311117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.317729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fd640 00:28:30.752 [2024-11-25 13:27:28.318460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:2504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.318491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.330107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e6b70 00:28:30.752 [2024-11-25 13:27:28.330980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.331007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.342440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e1710 00:28:30.752 [2024-11-25 13:27:28.343456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.343485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.354744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ee190 00:28:30.752 [2024-11-25 13:27:28.356170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.356205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.366852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f1430 00:28:30.752 [2024-11-25 13:27:28.368141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.368169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.377119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e6b70 00:28:30.752 [2024-11-25 13:27:28.377908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.377937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.387782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f9f68 00:28:30.752 [2024-11-25 13:27:28.388502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.388531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:30.752 [2024-11-25 13:27:28.400081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e1710 00:28:30.752 [2024-11-25 13:27:28.400965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.752 [2024-11-25 13:27:28.401009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:31.010 [2024-11-25 13:27:28.414826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e73e0 00:28:31.010 
[2024-11-25 13:27:28.416272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.010 [2024-11-25 13:27:28.416301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:31.010 [2024-11-25 13:27:28.427213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f5378 00:28:31.010 [2024-11-25 13:27:28.428877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.010 [2024-11-25 13:27:28.428919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:31.010 [2024-11-25 13:27:28.439191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7970 00:28:31.010 [2024-11-25 13:27:28.440821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.010 [2024-11-25 13:27:28.440849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.010 [2024-11-25 13:27:28.448717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e88f8 00:28:31.010 [2024-11-25 13:27:28.449903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.010 [2024-11-25 13:27:28.449936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:31.010 [2024-11-25 13:27:28.461078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16a6440) with pdu=0x2000166f7da8 00:28:31.010 [2024-11-25 13:27:28.462405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.010 [2024-11-25 13:27:28.462447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:31.010 [2024-11-25 13:27:28.473401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e5658 00:28:31.011 [2024-11-25 13:27:28.474893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.474943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.485716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ed4e8 00:28:31.011 [2024-11-25 13:27:28.487285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.487333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.497872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f57b0 00:28:31.011 [2024-11-25 13:27:28.499591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.499620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.506087] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166edd58 00:28:31.011 [2024-11-25 13:27:28.506990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.507031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.520376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e1710 00:28:31.011 [2024-11-25 13:27:28.521749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.521791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.530289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e0a68 00:28:31.011 [2024-11-25 13:27:28.530924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.530952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.543617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e4578 00:28:31.011 [2024-11-25 13:27:28.544950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.544991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:28:31.011 [2024-11-25 13:27:28.553500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f4298 00:28:31.011 [2024-11-25 13:27:28.554263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.554314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.564410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e6300 00:28:31.011 [2024-11-25 13:27:28.565183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.565210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.576378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166de470 00:28:31.011 [2024-11-25 13:27:28.577150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.577194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.588816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e8088 00:28:31.011 [2024-11-25 13:27:28.589576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.589620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.601073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fd640 00:28:31.011 [2024-11-25 13:27:28.601997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.602038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.613436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e99d8 00:28:31.011 [2024-11-25 13:27:28.614606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.614637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.624867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f4298 00:28:31.011 [2024-11-25 13:27:28.625941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.625968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.637384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f3e60 00:28:31.011 [2024-11-25 13:27:28.638610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.638637] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.649842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ebfd0 00:28:31.011 [2024-11-25 13:27:28.651161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.651203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.011 [2024-11-25 13:27:28.661688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e73e0 00:28:31.011 [2024-11-25 13:27:28.662635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.011 [2024-11-25 13:27:28.662664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.675725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e99d8 00:28:31.270 [2024-11-25 13:27:28.677535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.677579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.684161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f6890 00:28:31.270 [2024-11-25 13:27:28.684952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.684994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.697462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ee190 00:28:31.270 [2024-11-25 13:27:28.698440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.698470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.708344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7da8 00:28:31.270 [2024-11-25 13:27:28.710096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.710130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.718381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ebb98 00:28:31.270 [2024-11-25 13:27:28.719130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.719157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.731632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ec408 00:28:31.270 [2024-11-25 13:27:28.732592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:31.270 [2024-11-25 13:27:28.732636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.742579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166de8a8 00:28:31.270 [2024-11-25 13:27:28.743481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.743511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.754521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f46d0 00:28:31.270 [2024-11-25 13:27:28.755437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.755467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.766736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e0630 00:28:31.270 [2024-11-25 13:27:28.767640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.767681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.779009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e2c28 00:28:31.270 [2024-11-25 13:27:28.780114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 
lba:18057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.270 [2024-11-25 13:27:28.780162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:31.270 [2024-11-25 13:27:28.790675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e3060 00:28:31.271 [2024-11-25 13:27:28.791839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.791866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.803017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eff18 00:28:31.271 [2024-11-25 13:27:28.804325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.804367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.815311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e9168 00:28:31.271 [2024-11-25 13:27:28.816811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.816838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.827523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e6fa8 00:28:31.271 [2024-11-25 13:27:28.829155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.829182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.839786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f35f0 00:28:31.271 [2024-11-25 13:27:28.841583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.841611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.848023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fc560 00:28:31.271 [2024-11-25 13:27:28.848817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.848843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.859186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e5ec8 00:28:31.271 [2024-11-25 13:27:28.859949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.859976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.871061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eb760 
00:28:31.271 [2024-11-25 13:27:28.871907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.871953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.885164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7538 00:28:31.271 [2024-11-25 13:27:28.886410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.886445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.897479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e49b0 00:28:31.271 [2024-11-25 13:27:28.898809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.898837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.906149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f8618 00:28:31.271 [2024-11-25 13:27:28.906898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.906925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:31.271 [2024-11-25 13:27:28.918126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16a6440) with pdu=0x2000166f4b08 00:28:31.271 [2024-11-25 13:27:28.918876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.271 [2024-11-25 13:27:28.918904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:28.932454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f6890 00:28:31.530 [2024-11-25 13:27:28.933688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:28.933731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:28.944745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e6300 00:28:31.530 [2024-11-25 13:27:28.946181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:28.946224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:28.956996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7970 00:28:31.530 [2024-11-25 13:27:28.958595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:28.958638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:28.969041] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e38d0 00:28:31.530 [2024-11-25 13:27:28.970671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:28.970704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:28.977062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eff18 00:28:31.530 [2024-11-25 13:27:28.977836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:28.977881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:28.989880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fac10 00:28:31.530 [2024-11-25 13:27:28.990509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:28.990546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:29.002155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e3060 00:28:31.530 [2024-11-25 13:27:29.002962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:29.002996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:28:31.530 [2024-11-25 13:27:29.014347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166df118 00:28:31.530 [2024-11-25 13:27:29.015314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:29.015364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:29.025825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f92c0 00:28:31.530 [2024-11-25 13:27:29.027209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:29.027240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:29.037450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ec408 00:28:31.530 [2024-11-25 13:27:29.038656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:29.038706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:29.050006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7970 00:28:31.530 [2024-11-25 13:27:29.051367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.530 [2024-11-25 13:27:29.051401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:31.530 [2024-11-25 13:27:29.062618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e4578 00:28:31.530 [2024-11-25 13:27:29.064192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.064220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.075127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e0a68 00:28:31.531 [2024-11-25 13:27:29.076809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.076837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.085765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fa7d8 00:28:31.531 21578.00 IOPS, 84.29 MiB/s [2024-11-25T12:27:29.190Z] [2024-11-25 13:27:29.087231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.087269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.097735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e88f8 00:28:31.531 [2024-11-25 13:27:29.098949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 
13:27:29.098977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.110045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaab8 00:28:31.531 [2024-11-25 13:27:29.111458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.111491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.121145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ed0b0 00:28:31.531 [2024-11-25 13:27:29.122397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.122427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.133700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166edd58 00:28:31.531 [2024-11-25 13:27:29.135034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.135082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.145502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f8e88 00:28:31.531 [2024-11-25 13:27:29.146977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23542 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.147026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.157386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f2510 00:28:31.531 [2024-11-25 13:27:29.158824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.158874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.165935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ed920 00:28:31.531 [2024-11-25 13:27:29.166667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.166698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:31.531 [2024-11-25 13:27:29.179663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ef270 00:28:31.531 [2024-11-25 13:27:29.180815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.531 [2024-11-25 13:27:29.180864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:31.788 [2024-11-25 13:27:29.191162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f4298 00:28:31.788 [2024-11-25 13:27:29.192315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:37 nsid:1 lba:3355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.788 [2024-11-25 13:27:29.192361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:31.788 [2024-11-25 13:27:29.203047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f8618 00:28:31.788 [2024-11-25 13:27:29.203789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.788 [2024-11-25 13:27:29.203822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:31.788 [2024-11-25 13:27:29.216770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f6020 00:28:31.788 [2024-11-25 13:27:29.218131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.788 [2024-11-25 13:27:29.218181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:31.788 [2024-11-25 13:27:29.228565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f9f68 00:28:31.789 [2024-11-25 13:27:29.230029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.230071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.240846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e6b70 00:28:31.789 [2024-11-25 13:27:29.242495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.242539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.250122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e27f0 00:28:31.789 [2024-11-25 13:27:29.251195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.251244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.262083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e7818 00:28:31.789 [2024-11-25 13:27:29.262758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.262792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.274528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e9168 00:28:31.789 [2024-11-25 13:27:29.275410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.275446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.286372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166dece0 
00:28:31.789 [2024-11-25 13:27:29.287622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.287652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.297772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fe2e8 00:28:31.789 [2024-11-25 13:27:29.298578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.298622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.312453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7970 00:28:31.789 [2024-11-25 13:27:29.314211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.314255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.320867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e8088 00:28:31.789 [2024-11-25 13:27:29.321745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.321787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.333085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16a6440) with pdu=0x2000166e0a68 00:28:31.789 [2024-11-25 13:27:29.333980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.334009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.347122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaab8 00:28:31.789 [2024-11-25 13:27:29.348588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.348617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.359593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e8d30 00:28:31.789 [2024-11-25 13:27:29.361213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.361259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.368113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ec840 00:28:31.789 [2024-11-25 13:27:29.368900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.368945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.382462] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166df550 00:28:31.789 [2024-11-25 13:27:29.383760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.383807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.395157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ec840 00:28:31.789 [2024-11-25 13:27:29.396543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.396590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.405050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f35f0 00:28:31.789 [2024-11-25 13:27:29.405818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.405864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.417273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e1f80 00:28:31.789 [2024-11-25 13:27:29.417872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.417909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:28:31.789 [2024-11-25 13:27:29.432152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f0ff8 00:28:31.789 [2024-11-25 13:27:29.433953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.433998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:31.789 [2024-11-25 13:27:29.440669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ed4e8 00:28:31.789 [2024-11-25 13:27:29.441482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.789 [2024-11-25 13:27:29.441528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.452970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f4b08 00:28:32.048 [2024-11-25 13:27:29.453777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.453826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.464166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f5378 00:28:32.048 [2024-11-25 13:27:29.464930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.464973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.476724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fc560 00:28:32.048 [2024-11-25 13:27:29.477616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.477666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.490467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e5a90 00:28:32.048 [2024-11-25 13:27:29.491816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.491854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.501953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ef6a8 00:28:32.048 [2024-11-25 13:27:29.503670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.503701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.512077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f7da8 00:28:32.048 [2024-11-25 13:27:29.512923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.512954] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.524567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166de038 00:28:32.048 [2024-11-25 13:27:29.525579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.525610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.537582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ec408 00:28:32.048 [2024-11-25 13:27:29.538459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.538493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.548973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e5a90 00:28:32.048 [2024-11-25 13:27:29.549744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.549780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.563051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166efae0 00:28:32.048 [2024-11-25 13:27:29.564789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.564832] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.575547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e5a90 00:28:32.048 [2024-11-25 13:27:29.577403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.577448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.584030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e88f8 00:28:32.048 [2024-11-25 13:27:29.584889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.584918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.595399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f46d0 00:28:32.048 [2024-11-25 13:27:29.596231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.596274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.608010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fe2e8 00:28:32.048 [2024-11-25 13:27:29.608990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24863 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:32.048 [2024-11-25 13:27:29.609034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.622670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eb760 00:28:32.048 [2024-11-25 13:27:29.624130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.624174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.634510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fc128 00:28:32.048 [2024-11-25 13:27:29.636239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.636269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.645483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e5a90 00:28:32.048 [2024-11-25 13:27:29.647354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.647393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.655846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e4de8 00:28:32.048 [2024-11-25 13:27:29.656690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.656720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.668569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ee190 00:28:32.048 [2024-11-25 13:27:29.669594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.669624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.680760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f20d8 00:28:32.048 [2024-11-25 13:27:29.681765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.681793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:32.048 [2024-11-25 13:27:29.693204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eb328 00:28:32.048 [2024-11-25 13:27:29.694191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.048 [2024-11-25 13:27:29.694219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:32.306 [2024-11-25 13:27:29.705733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e9e10 00:28:32.306 [2024-11-25 13:27:29.706977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.707029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.717655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f57b0 00:28:32.307 [2024-11-25 13:27:29.718983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.719027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.730145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e3d08 00:28:32.307 [2024-11-25 13:27:29.731594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.731624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.742652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.744242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.744286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.754762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f8a50 00:28:32.307 
[2024-11-25 13:27:29.756342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.756386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.765528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166edd58 00:28:32.307 [2024-11-25 13:27:29.766901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.766931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.777547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166ff3c8 00:28:32.307 [2024-11-25 13:27:29.778744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.778789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.789359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f6890 00:28:32.307 [2024-11-25 13:27:29.790706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.790749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.801426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x16a6440) with pdu=0x2000166f20d8 00:28:32.307 [2024-11-25 13:27:29.802797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.802828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.813189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166f6890 00:28:32.307 [2024-11-25 13:27:29.814517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.814566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.825416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166fb480 00:28:32.307 [2024-11-25 13:27:29.826781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.826830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.836995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166e6b70 00:28:32.307 [2024-11-25 13:27:29.838315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.838366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.849656] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.849878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.849915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.863094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.863350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.863380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.876893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.877141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.877190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.890606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.890862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.890892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 
m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.904089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.904340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.904371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.917898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.918145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.918177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.931662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.931909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.931940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.945198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.945475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.945509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.307 [2024-11-25 13:27:29.958905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.307 [2024-11-25 13:27:29.959151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.307 [2024-11-25 13:27:29.959181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.566 [2024-11-25 13:27:29.972504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.566 [2024-11-25 13:27:29.972753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.566 [2024-11-25 13:27:29.972786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.566 [2024-11-25 13:27:29.986429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.566 [2024-11-25 13:27:29.986711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.566 [2024-11-25 13:27:29.986761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.566 [2024-11-25 13:27:30.000229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.566 [2024-11-25 13:27:30.000464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.566 [2024-11-25 13:27:30.000497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.566 [2024-11-25 13:27:30.013734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.566 [2024-11-25 13:27:30.013953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.566 [2024-11-25 13:27:30.013986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.566 [2024-11-25 13:27:30.027456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.566 [2024-11-25 13:27:30.027705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.566 [2024-11-25 13:27:30.027752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.566 [2024-11-25 13:27:30.041390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.566 [2024-11-25 13:27:30.041626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.566 [2024-11-25 13:27:30.041670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:32.566 [2024-11-25 13:27:30.055119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0 00:28:32.566 [2024-11-25 13:27:30.055391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:32.566 [2024-11-25 13:27:30.055425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:32.566 [2024-11-25 13:27:30.069058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0
00:28:32.566 [2024-11-25 13:27:30.069334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:32.566 [2024-11-25 13:27:30.069368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:32.566 [2024-11-25 13:27:30.082854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6440) with pdu=0x2000166eaef0
00:28:32.566 [2024-11-25 13:27:30.083106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:32.566 [2024-11-25 13:27:30.083138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:28:32.566 21189.00 IOPS, 82.77 MiB/s
00:28:32.566 Latency(us)
00:28:32.566 [2024-11-25T12:27:30.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.566 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:32.566 nvme0n1 : 2.01 21180.86 82.74 0.00 0.00 6029.25 2730.67 15825.73
00:28:32.566 [2024-11-25T12:27:30.225Z] ===================================================================================================================
00:28:32.566 [2024-11-25T12:27:30.225Z] Total : 21180.86 82.74 0.00 0.00 6029.25 2730.67 15825.73
00:28:32.566 {
00:28:32.566 "results": [
00:28:32.566 {
00:28:32.566 "job": "nvme0n1",
00:28:32.566 "core_mask": "0x2",
00:28:32.566 "workload": "randwrite",
00:28:32.566 "status": "finished",
00:28:32.566 "queue_depth": 128,
00:28:32.566 "io_size": 4096,
00:28:32.566 "runtime": 2.008323,
00:28:32.566 "iops": 21180.855868304054,
00:28:32.566 "mibps": 82.73771823556271,
00:28:32.566 "io_failed": 0,
00:28:32.566 "io_timeout": 0,
00:28:32.566 "avg_latency_us": 6029.249832080423,
00:28:32.566 "min_latency_us": 2730.6666666666665,
00:28:32.566 "max_latency_us": 15825.730370370371
00:28:32.566 }
00:28:32.566 ],
00:28:32.566 "core_count": 1
00:28:32.566 }
00:28:32.566 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:32.566 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:32.566 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:32.566 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:32.566 | .driver_specific
00:28:32.566 | .nvme_error
00:28:32.566 | .status_code
00:28:32.566 | .command_transient_transport_error'
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 166 > 0 ))
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3279940
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3279940 ']'
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3279940
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3279940
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3279940'
00:28:32.825 killing process with pid 3279940
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3279940
00:28:32.825 Received shutdown signal, test time was about 2.000000 seconds
00:28:32.825
00:28:32.825 Latency(us)
00:28:32.825 [2024-11-25T12:27:30.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.825 [2024-11-25T12:27:30.484Z] ===================================================================================================================
00:28:32.825 [2024-11-25T12:27:30.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:32.825 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3279940
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3280385
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3280385 /var/tmp/bperf.sock
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3280385 ']'
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:33.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:33.083 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:33.083 [2024-11-25 13:27:30.717971] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization...
00:28:33.083 [2024-11-25 13:27:30.718060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280385 ]
00:28:33.083 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:33.083 Zero copy mechanism will not be used.
00:28:33.341 [2024-11-25 13:27:30.788843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:33.341 [2024-11-25 13:27:30.850235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:33.341 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:33.341 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:33.342 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:33.342 13:27:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:33.599 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:33.599 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:33.599 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:33.856 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:33.856 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:33.856 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:34.115 nvme0n1
00:28:34.115 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:34.115 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.115 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.115 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.115 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.115 13:27:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:34.115 Zero copy mechanism will not be used. 00:28:34.115 Running I/O for 2 seconds... 00:28:34.115 [2024-11-25 13:27:31.728486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.728811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.728856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.115 [2024-11-25 13:27:31.735288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.735439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.735480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.115 
[2024-11-25 13:27:31.741623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.741771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.741810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.115 [2024-11-25 13:27:31.747705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.747811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.747850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.115 [2024-11-25 13:27:31.753444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.753608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.753640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.115 [2024-11-25 13:27:31.759115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.759435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.759476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.115 [2024-11-25 13:27:31.765056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.765395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.765435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.115 [2024-11-25 13:27:31.770828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.115 [2024-11-25 13:27:31.771234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.115 [2024-11-25 13:27:31.771273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.776543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.776884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.776925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.782183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.782493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.782525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.787650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.787864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.787895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.793157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.793514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.793550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.798656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.798941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.798973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.804212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.804549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.804587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.809751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.810025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.810056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.815354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.815600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.815640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.821083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.821466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.821500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.826338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.826617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:34.375 [2024-11-25 13:27:31.826656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.831661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.831907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.831939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.838120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.838435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.375 [2024-11-25 13:27:31.838466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.375 [2024-11-25 13:27:31.843422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.375 [2024-11-25 13:27:31.843702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.843740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.848067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.848295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.848342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.852754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.853017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.853051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.857551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.857776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.857811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.862369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.862604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.862641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.867034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.867283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.867325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.872136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.872469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.872504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.877726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.877960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.877991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.882229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.882489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.882529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.887410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:34.376 [2024-11-25 13:27:31.887694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.887739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.892594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.892900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.892940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.897582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.897852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.897892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.902768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.903060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.903101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.907950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.908262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.908311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.913036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.913354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.913394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.918121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.918440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.918478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.923293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.923651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.923691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.928274] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.928576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.928617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.933482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.933794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.933834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.938518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.938740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.938780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.943784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.944080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.944120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:34.376 [2024-11-25 13:27:31.948885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.949173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.949213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.954049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.954373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.954414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.958999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.959319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.959361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.964122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.964418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.964457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.969332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.969578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.969618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.974455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.376 [2024-11-25 13:27:31.974701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.376 [2024-11-25 13:27:31.974739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.376 [2024-11-25 13:27:31.979578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:31.979904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:31.979949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:31.984676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:31.984975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:31.985016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:31.989827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:31.990083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:31.990120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:31.994982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:31.995240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:31.995280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:32.000028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:32.000361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:32.000399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:32.004996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:32.005301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.377 [2024-11-25 13:27:32.005349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:32.010164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:32.010502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:32.010545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:32.015174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:32.015474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:32.015513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:32.020289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:32.020504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:32.020544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:32.025330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:32.025603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:32.025643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.377 [2024-11-25 13:27:32.030479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.377 [2024-11-25 13:27:32.030851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.377 [2024-11-25 13:27:32.030888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.035482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.636 [2024-11-25 13:27:32.035663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.035703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.040457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.636 [2024-11-25 13:27:32.040614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.040655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.045601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.636 [2024-11-25 13:27:32.045804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.045845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.050603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.636 [2024-11-25 13:27:32.050777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.050817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.055821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.636 [2024-11-25 13:27:32.055989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.056029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.060977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.636 [2024-11-25 13:27:32.061191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.061230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.065980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:34.636 [2024-11-25 13:27:32.066144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.066183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.636 [2024-11-25 13:27:32.071010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.636 [2024-11-25 13:27:32.071224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.636 [2024-11-25 13:27:32.071264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.076122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.076312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.076352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.081168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.081386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.081425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.086179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.086340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.086381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.091246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.091421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.091460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.096328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.096505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.096545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.101297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.101458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.101497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.106455] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.106647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.106687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.111447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.111612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.111659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.116477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.116680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.116717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.121602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.121765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.121803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:34.637 [2024-11-25 13:27:32.126654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.126850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.126889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.131760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.131952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.131992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.136711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.136886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.136925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.141775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.141974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.142014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.146823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.147034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.147074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.151891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.152007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.152043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.157043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.157211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.157250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.162130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.162296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.162342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.167286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.167496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.167535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.172263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.172455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.172495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.177378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.177522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.177561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.182585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.182743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.637 [2024-11-25 13:27:32.182783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.187636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.187826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.187865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.192688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.192899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.192939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.197673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.197843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.197882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.202766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.202936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.637 [2024-11-25 13:27:32.202974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.637 [2024-11-25 13:27:32.207865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.637 [2024-11-25 13:27:32.208018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.208057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.212874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.213096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.213136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.217966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.218172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.218211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.222973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.223135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.223175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.228034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.228207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.228247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.233196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.233418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.233457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.238448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.238615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.238647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.244553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:34.638 [2024-11-25 13:27:32.244700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.244741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.250409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.250505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.250540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.256582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.256742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.256773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.262478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.262592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.262625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.267623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.267771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.267802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.273211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.273299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.273343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.277737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.277806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.277843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.282216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.282340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.282376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.286907] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.287037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.287067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.638 [2024-11-25 13:27:32.291491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.638 [2024-11-25 13:27:32.291590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.638 [2024-11-25 13:27:32.291630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.296289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.296380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.296417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.300947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.301033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.301070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:34.897 [2024-11-25 13:27:32.305462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.305547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.305583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.310570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.310698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.310734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.315723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.315856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.315886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.321006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.321103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.321137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.325402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.325493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.325528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.329643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.329723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.329758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.333867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.333936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.333964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.338134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.338250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.338288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.342405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.342488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.342520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.346861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.346934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.346965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.351134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.351213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.351242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.355467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.355563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.897 [2024-11-25 13:27:32.355602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.359687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.359770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.359807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.364015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.364175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.364214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.368677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.368750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.368786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.373426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.373605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.373643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.378487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.378705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.378736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.383958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.384165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.384204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.388931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.389122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.389160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.393288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.393432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.393470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.397648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.397753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.397787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.401823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.401981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.402020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.897 [2024-11-25 13:27:32.406485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.897 [2024-11-25 13:27:32.406610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.897 [2024-11-25 13:27:32.406646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.411101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:34.898 [2024-11-25 13:27:32.411274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.411318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.416554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.416624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.416660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.420895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.420968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.420997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.425512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.425595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.425630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.429943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.430029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.430057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.434562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.434671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.434701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.439035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.439135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.439166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.443677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.443799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.443836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.448045] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.448124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.448157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.452388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.452507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.452548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.456935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.457017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.457055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.461631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.461777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.461818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:34.898 [2024-11-25 13:27:32.466205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.466351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.466390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.470804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.470876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.470913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.475553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.475673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.475707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.480195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.480286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.480331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.484696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.484782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.484821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.489034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.489168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.489210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.493696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.493796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.493824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.498194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.498284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.498322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.502616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.502705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.502740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.506859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.506979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.507014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.511409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.511485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.511519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.516088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.516218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.898 [2024-11-25 13:27:32.516258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.520776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.520872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.520911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.525177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.525300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.525338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.898 [2024-11-25 13:27:32.529822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.898 [2024-11-25 13:27:32.529928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.898 [2024-11-25 13:27:32.529967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.899 [2024-11-25 13:27:32.534392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.899 [2024-11-25 13:27:32.534493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.899 [2024-11-25 13:27:32.534528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.899 [2024-11-25 13:27:32.538774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.899 [2024-11-25 13:27:32.538863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.899 [2024-11-25 13:27:32.538892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.899 [2024-11-25 13:27:32.543368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.899 [2024-11-25 13:27:32.543461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.899 [2024-11-25 13:27:32.543501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.899 [2024-11-25 13:27:32.547886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.899 [2024-11-25 13:27:32.547992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.899 [2024-11-25 13:27:32.548032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.899 [2024-11-25 13:27:32.553425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:34.899 [2024-11-25 13:27:32.553499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.899 [2024-11-25 13:27:32.553537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.557832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.557953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.557988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.562891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.563063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.563102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.568458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.568641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.568680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.573957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:35.158 [2024-11-25 13:27:32.574118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.574156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.579389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.579559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.579593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.584956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.585147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.585177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.590352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.590491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.590530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.595927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.596084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.596115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.601617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.601805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.601844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.607100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.607248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.607280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.612903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.613097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.613132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.618274] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.618428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.618472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.623959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.624153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.624203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.629467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.629634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.629672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.635034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.635209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.635241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:35.158 [2024-11-25 13:27:32.640776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.640906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.640939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.646167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.646375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.646407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.651535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.651708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.651741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.657216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.657424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.657455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.662392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.662534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.662569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.667839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.667990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.668023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.674466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.674588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.674621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.679049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.679142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.679193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.683631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.158 [2024-11-25 13:27:32.683778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.158 [2024-11-25 13:27:32.683812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.158 [2024-11-25 13:27:32.688287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.688441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.688473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.693106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.693214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.693253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.697576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.697740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.159 [2024-11-25 13:27:32.697779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.701885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.701981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.702011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.706188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.706328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.706366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.710464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.710544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.710579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.714780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.714928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.714960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.719053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.719175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.719211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.723491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.723650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.723690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.159 6153.00 IOPS, 769.12 MiB/s [2024-11-25T12:27:32.818Z] [2024-11-25 13:27:32.729129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.729284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.729332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.733447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 
13:27:32.733575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.733613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.737857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.738040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.738083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.742829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.743003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.743044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.747898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.748093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.748140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.752991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with 
pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.753172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.753211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.758005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.758124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.758163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.763030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.763209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.763244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.768052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.768260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.768297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.773160] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.773368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.773409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.778150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.778334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.778370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.783228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.783405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.783442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.788293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.788476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.788516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 
13:27:32.793379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.793548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.793588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.798475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.798646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.798684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.803551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.803731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.803767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.808615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.808802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.808839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:28:35.159 [2024-11-25 13:27:32.813645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.159 [2024-11-25 13:27:32.813862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.159 [2024-11-25 13:27:32.813902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.818688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.818832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.818870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.823764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.823945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.823977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.828820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.829033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.829069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.833917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.834118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.834149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.838951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.839175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.839210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.843981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.844138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.844174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.849022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.849232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.849264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.854110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.854320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.854353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.859193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.859401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.859434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.864258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.864462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.864495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.869255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.869422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
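The run above is dominated by repeated `data_crc32_calc_done` failures: the data digest carried in each NVMe/TCP DATA PDU did not match the payload as received, so each WRITE completes with a transient transport error. The NVMe/TCP data digest is a CRC-32C (Castagnoli) checksum; as a triage aid, here is a minimal bit-wise Python sketch of that checksum — an illustrative reference implementation, not SPDK's accelerated one:

```python
# Illustrative bit-wise CRC-32C (Castagnoli), the checksum used for the
# NVMe/TCP data digest whose mismatches are logged above. Plain-Python
# sketch for understanding only, not SPDK's optimized implementation.

CRC32C_POLY = 0x82F63B78  # reflected form of the Castagnoli polynomial

def crc32c(data: bytes, crc: int = 0) -> int:
    """CRC-32C over `data` (reflected, init/xorout 0xFFFFFFFF).

    Passing a previous result as `crc` continues the checksum, so the
    digest can be accumulated across PDU data segments.
    """
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard catalogue check value: CRC-32C of b"123456789" is 0xE3069283.
assert crc32c(b"123456789") == 0xE3069283
```

A digest error like those above means the receiver's CRC-32C over the PDU payload differed from the digest field the sender appended, pointing at corruption on the wire (or deliberate fault injection, as in this test) rather than at the NVMe command itself.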
00:28:35.422 [2024-11-25 13:27:32.869456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.874327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.874502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.874533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.879370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.879576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.879613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.884480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.884662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.884693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.889538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.889738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.889775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.894627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.894849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.894886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.899683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.899910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.899941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.904771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.904993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.905025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.909773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.422 [2024-11-25 13:27:32.909953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.422 [2024-11-25 13:27:32.909986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.422 [2024-11-25 13:27:32.914831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.915052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.915091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.919811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.919996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.920033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.924899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.925201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.925236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.929901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:35.423 [2024-11-25 13:27:32.930074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.930114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.934968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.935170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.935209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.940055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.940257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.940294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.945127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.945326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.945367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.950217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.950409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.950449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.955187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.955359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.955399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.960280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.960446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.960486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.965346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.965528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.965568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.970425] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.970611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.970651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.975495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.975686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.975727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.980595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.980775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.980812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.985663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.985847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.985887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:35.423 [2024-11-25 13:27:32.990701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.990900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.990940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:32.995697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:32.995880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:32.995920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.000780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.000959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.000998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.005864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.006046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.006083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.011030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.011221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.011266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.016092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.016195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.016234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.021081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.021275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.021326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.026297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.026497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.026535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.031342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.031554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.031593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.036357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.036510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.036550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.041505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.041726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.423 [2024-11-25 13:27:33.041765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.046646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.423 [2024-11-25 13:27:33.046809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.423 [2024-11-25 13:27:33.046847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.423 [2024-11-25 13:27:33.051789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.424 [2024-11-25 13:27:33.052013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.424 [2024-11-25 13:27:33.052053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.424 [2024-11-25 13:27:33.056789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.424 [2024-11-25 13:27:33.056977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.424 [2024-11-25 13:27:33.057015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.424 [2024-11-25 13:27:33.061826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.424 [2024-11-25 13:27:33.062048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.424 [2024-11-25 13:27:33.062085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.424 [2024-11-25 13:27:33.066898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.424 [2024-11-25 13:27:33.067007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.424 [2024-11-25 13:27:33.067046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.424 [2024-11-25 13:27:33.071878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.424 [2024-11-25 13:27:33.072086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.424 [2024-11-25 13:27:33.072125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.424 [2024-11-25 13:27:33.076846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.424 [2024-11-25 13:27:33.077032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.424 [2024-11-25 13:27:33.077071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.081949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.082118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.082154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.087081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.087195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.087234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.092177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.092404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.092444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.097177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.097356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.097396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.102203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.102344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.102388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.107236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:35.683 [2024-11-25 13:27:33.107408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.107453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.112327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.112467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.112507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.117468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.117667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.117716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.122580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.122767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.122807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.127634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.127832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.127872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.132772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.132929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.132968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.137813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.138005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.138044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.142911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.143081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.143128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.147996] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.148169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.148208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.153095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.153250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.153289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.158300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.158469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.158509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.163478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.163664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.163703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:35.683 [2024-11-25 13:27:33.168634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.168856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.168896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.173662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.173816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.173854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.683 [2024-11-25 13:27:33.178837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.683 [2024-11-25 13:27:33.179021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.683 [2024-11-25 13:27:33.179060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.183818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.183978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.184018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.189028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.189201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.189240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.194338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.194513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.194553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.199397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.199605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.199645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.204520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.204697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.204737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.209575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.209772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.209811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.214766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.214957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.214997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.219865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.220047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.220088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.225031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.225249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.684 [2024-11-25 13:27:33.225289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.230043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.230214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.230253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.235269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.235433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.235472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.240352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.240506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.240547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.245499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.245702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.245742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.250496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.250664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.250703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.255563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.255747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.255786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.260652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.260827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.260866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.265625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.265781] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.265820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.270683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.270860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.270900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.275911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.276065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.276113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.280980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.281151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.281190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.286090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.286259] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.286299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.291271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.291526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.291560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.296419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.296628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.296667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.301575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.301731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.301770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.306751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with 
pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.306938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.306973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.311996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.312162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.312198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.317075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.684 [2024-11-25 13:27:33.317272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.684 [2024-11-25 13:27:33.317315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.684 [2024-11-25 13:27:33.322138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.685 [2024-11-25 13:27:33.322328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.685 [2024-11-25 13:27:33.322365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.685 [2024-11-25 13:27:33.327172] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.685 [2024-11-25 13:27:33.327428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.685 [2024-11-25 13:27:33.327458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.685 [2024-11-25 13:27:33.332373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.685 [2024-11-25 13:27:33.332590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.685 [2024-11-25 13:27:33.332629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.685 [2024-11-25 13:27:33.337510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.685 [2024-11-25 13:27:33.337714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.685 [2024-11-25 13:27:33.337744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.342655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.342876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.342916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 
13:27:33.347838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.348066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.348105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.352949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.353159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.353198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.358052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.358286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.358330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.363239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.363452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.363494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.368412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.368656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.368696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.373502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.373781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.373814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.378620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.378860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.378891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.383730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.383938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.383969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.388790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.389035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.389067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.394043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.394215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.394246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.399163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.399431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.399463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.404315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.404527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.404558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.409397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.409608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.409646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.414524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.414796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.944 [2024-11-25 13:27:33.414826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.944 [2024-11-25 13:27:33.419701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.944 [2024-11-25 13:27:33.419889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.419919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.424880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.425092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.945 [2024-11-25 13:27:33.425124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.430004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.430200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.430232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.435195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.435394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.435425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.440287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.440538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.440578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.445356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.445547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.445577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.450483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.450749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.450785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.455576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.455834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.455865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.460666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.460905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.460936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.465774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.466011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.466048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.470887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.471052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.471086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.476375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.476605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.476644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.481500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.481686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.481725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.486519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:35.945 [2024-11-25 13:27:33.486704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.486735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.491695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.491894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.491925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.496783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.496962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.496999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.501995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.502167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.502203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.507034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.507251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.507284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.512204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.512415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.512456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.517420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.517625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.517665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.522508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.522743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.522781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.527578] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.527780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.527820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.532728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.532904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.532942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.537909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.538126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.538161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.543024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.543294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.543353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:35.945 [2024-11-25 13:27:33.548154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.548362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.548402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.553207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.553364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.553400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.945 [2024-11-25 13:27:33.558344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.945 [2024-11-25 13:27:33.558523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.945 [2024-11-25 13:27:33.558563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.563579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.563769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.563802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.568683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.568881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.568911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.573767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.573938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.573968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.578874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.579145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.579181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.583987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.584227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.584260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.589232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.589467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.589501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.594347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.594522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.594565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.946 [2024-11-25 13:27:33.599531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:35.946 [2024-11-25 13:27:33.599735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-25 13:27:33.599767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.204 [2024-11-25 13:27:33.604561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.204 [2024-11-25 13:27:33.604812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:36.204 [2024-11-25 13:27:33.604844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.204 [2024-11-25 13:27:33.609751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.204 [2024-11-25 13:27:33.610001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.204 [2024-11-25 13:27:33.610042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.204 [2024-11-25 13:27:33.614874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.204 [2024-11-25 13:27:33.615118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.204 [2024-11-25 13:27:33.615156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.204 [2024-11-25 13:27:33.620026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.204 [2024-11-25 13:27:33.620171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.204 [2024-11-25 13:27:33.620206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.204 [2024-11-25 13:27:33.625178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.204 [2024-11-25 13:27:33.625372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.204 [2024-11-25 13:27:33.625407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.204 [2024-11-25 13:27:33.630245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.204 [2024-11-25 13:27:33.630471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.630503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.635362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.635522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.635553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.640448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.640615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.640653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.645647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.645811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.645851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.650826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.651039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.651077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.655915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.656113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.656149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.660993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.661197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.661230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.666149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 
00:28:36.205 [2024-11-25 13:27:33.666402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.666440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.671141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.671370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.671402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.676275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.676462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.676502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.681392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.681653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.681684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.686563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.686823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.686863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.691721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.691930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.691966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.696853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.697093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.697123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.702022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.702239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.702271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.707188] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.707427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.707460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.712321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.712497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.712533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.717429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.717681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.717720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.205 [2024-11-25 13:27:33.722766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.723018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.723057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:36.205 [2024-11-25 13:27:33.727857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x16a6780) with pdu=0x2000166ff3c8 00:28:36.205 [2024-11-25 13:27:33.728121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.205 [2024-11-25 13:27:33.728157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.205 6122.50 IOPS, 765.31 MiB/s 00:28:36.205 Latency(us) 00:28:36.205 [2024-11-25T12:27:33.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.205 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:36.205 nvme0n1 : 2.00 6118.78 764.85 0.00 0.00 2606.47 1784.04 9077.95 00:28:36.205 [2024-11-25T12:27:33.864Z] =================================================================================================================== 00:28:36.205 [2024-11-25T12:27:33.864Z] Total : 6118.78 764.85 0.00 0.00 2606.47 1784.04 9077.95 00:28:36.205 { 00:28:36.205 "results": [ 00:28:36.205 { 00:28:36.205 "job": "nvme0n1", 00:28:36.205 "core_mask": "0x2", 00:28:36.205 "workload": "randwrite", 00:28:36.205 "status": "finished", 00:28:36.205 "queue_depth": 16, 00:28:36.205 "io_size": 131072, 00:28:36.205 "runtime": 2.004485, 00:28:36.205 "iops": 6118.778638902261, 00:28:36.205 "mibps": 764.8473298627827, 00:28:36.205 "io_failed": 0, 00:28:36.205 "io_timeout": 0, 00:28:36.205 "avg_latency_us": 2606.473715329679, 00:28:36.205 "min_latency_us": 1784.0355555555554, 00:28:36.205 "max_latency_us": 9077.94962962963 00:28:36.205 } 00:28:36.205 ], 00:28:36.205 "core_count": 1 00:28:36.205 } 00:28:36.205 13:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.205 13:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.205 13:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.205 | .driver_specific 00:28:36.205 | .nvme_error 00:28:36.205 | .status_code 00:28:36.205 | .command_transient_transport_error' 00:28:36.205 13:27:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 396 > 0 )) 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3280385 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3280385 ']' 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3280385 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3280385 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3280385' 00:28:36.462 killing process with pid 3280385 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3280385 00:28:36.462 Received shutdown signal, test 
time was about 2.000000 seconds 00:28:36.462 00:28:36.462 Latency(us) 00:28:36.462 [2024-11-25T12:27:34.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.462 [2024-11-25T12:27:34.121Z] =================================================================================================================== 00:28:36.462 [2024-11-25T12:27:34.121Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.462 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3280385 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3278959 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3278959 ']' 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3278959 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3278959 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3278959' 00:28:36.719 killing process with pid 3278959 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3278959 00:28:36.719 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 
3278959 00:28:36.979 00:28:36.979 real 0m15.670s 00:28:36.979 user 0m31.412s 00:28:36.979 sys 0m4.333s 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:36.979 ************************************ 00:28:36.979 END TEST nvmf_digest_error 00:28:36.979 ************************************ 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.979 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.979 rmmod nvme_tcp 00:28:36.979 rmmod nvme_fabrics 00:28:36.979 rmmod nvme_keyring 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3278959 ']' 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3278959 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3278959 ']' 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 3278959 00:28:37.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3278959) - No such process 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3278959 is not found' 00:28:37.238 Process with pid 3278959 is not found 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.238 13:27:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.142 00:28:39.142 real 0m35.621s 00:28:39.142 user 1m3.482s 00:28:39.142 sys 0m10.106s 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@10 -- # set +x 00:28:39.142 ************************************ 00:28:39.142 END TEST nvmf_digest 00:28:39.142 ************************************ 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.142 ************************************ 00:28:39.142 START TEST nvmf_bdevperf 00:28:39.142 ************************************ 00:28:39.142 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:39.401 * Looking for test storage... 
00:28:39.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.401 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:39.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.402 --rc genhtml_branch_coverage=1 00:28:39.402 --rc genhtml_function_coverage=1 00:28:39.402 --rc genhtml_legend=1 00:28:39.402 --rc geninfo_all_blocks=1 00:28:39.402 --rc geninfo_unexecuted_blocks=1 00:28:39.402 00:28:39.402 ' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:28:39.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.402 --rc genhtml_branch_coverage=1 00:28:39.402 --rc genhtml_function_coverage=1 00:28:39.402 --rc genhtml_legend=1 00:28:39.402 --rc geninfo_all_blocks=1 00:28:39.402 --rc geninfo_unexecuted_blocks=1 00:28:39.402 00:28:39.402 ' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:39.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.402 --rc genhtml_branch_coverage=1 00:28:39.402 --rc genhtml_function_coverage=1 00:28:39.402 --rc genhtml_legend=1 00:28:39.402 --rc geninfo_all_blocks=1 00:28:39.402 --rc geninfo_unexecuted_blocks=1 00:28:39.402 00:28:39.402 ' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:39.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.402 --rc genhtml_branch_coverage=1 00:28:39.402 --rc genhtml_function_coverage=1 00:28:39.402 --rc genhtml_legend=1 00:28:39.402 --rc geninfo_all_blocks=1 00:28:39.402 --rc geninfo_unexecuted_blocks=1 00:28:39.402 00:28:39.402 ' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:39.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:39.402 13:27:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.995 13:27:39 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:41.995 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.995 
13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:41.995 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:41.995 Found net devices under 0000:09:00.0: cvl_0_0 00:28:41.995 13:27:39 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:41.995 Found net devices under 0000:09:00.1: cvl_0_1 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.995 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:28:41.995 00:28:41.996 --- 10.0.0.2 ping statistics --- 00:28:41.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.996 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:41.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:28:41.996 00:28:41.996 --- 10.0.0.1 ping statistics --- 00:28:41.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.996 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3282753 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3282753 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3282753 ']' 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 [2024-11-25 13:27:39.294143] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:28:41.996 [2024-11-25 13:27:39.294219] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.996 [2024-11-25 13:27:39.370030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:41.996 [2024-11-25 13:27:39.428987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.996 [2024-11-25 13:27:39.429038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:41.996 [2024-11-25 13:27:39.429065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.996 [2024-11-25 13:27:39.429076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.996 [2024-11-25 13:27:39.429085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:41.996 [2024-11-25 13:27:39.430596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.996 [2024-11-25 13:27:39.430727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.996 [2024-11-25 13:27:39.430732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 [2024-11-25 13:27:39.563814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 Malloc0 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.996 [2024-11-25 13:27:39.622311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:41.996 
13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:41.996 { 00:28:41.996 "params": { 00:28:41.996 "name": "Nvme$subsystem", 00:28:41.996 "trtype": "$TEST_TRANSPORT", 00:28:41.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.996 "adrfam": "ipv4", 00:28:41.996 "trsvcid": "$NVMF_PORT", 00:28:41.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.996 "hdgst": ${hdgst:-false}, 00:28:41.996 "ddgst": ${ddgst:-false} 00:28:41.996 }, 00:28:41.996 "method": "bdev_nvme_attach_controller" 00:28:41.996 } 00:28:41.996 EOF 00:28:41.996 )") 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:41.996 13:27:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:41.996 "params": { 00:28:41.996 "name": "Nvme1", 00:28:41.996 "trtype": "tcp", 00:28:41.996 "traddr": "10.0.0.2", 00:28:41.996 "adrfam": "ipv4", 00:28:41.996 "trsvcid": "4420", 00:28:41.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.996 "hdgst": false, 00:28:41.996 "ddgst": false 00:28:41.996 }, 00:28:41.996 "method": "bdev_nvme_attach_controller" 00:28:41.996 }' 00:28:42.254 [2024-11-25 13:27:39.670509] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:28:42.254 [2024-11-25 13:27:39.670602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282892 ] 00:28:42.254 [2024-11-25 13:27:39.738893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.254 [2024-11-25 13:27:39.798939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.819 Running I/O for 1 seconds... 00:28:43.751 8248.00 IOPS, 32.22 MiB/s 00:28:43.751 Latency(us) 00:28:43.751 [2024-11-25T12:27:41.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.751 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:43.751 Verification LBA range: start 0x0 length 0x4000 00:28:43.751 Nvme1n1 : 1.02 8335.89 32.56 0.00 0.00 15277.88 3470.98 15049.01 00:28:43.751 [2024-11-25T12:27:41.410Z] =================================================================================================================== 00:28:43.751 [2024-11-25T12:27:41.410Z] Total : 8335.89 32.56 0.00 0.00 15277.88 3470.98 15049.01 00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3283042 00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}"
00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:44.009 {
00:28:44.009 "params": {
00:28:44.009 "name": "Nvme$subsystem",
00:28:44.009 "trtype": "$TEST_TRANSPORT",
00:28:44.009 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:44.009 "adrfam": "ipv4",
00:28:44.009 "trsvcid": "$NVMF_PORT",
00:28:44.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:44.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:44.009 "hdgst": ${hdgst:-false},
00:28:44.009 "ddgst": ${ddgst:-false}
00:28:44.009 },
00:28:44.009 "method": "bdev_nvme_attach_controller"
00:28:44.009 }
00:28:44.009 EOF
00:28:44.009 )")
00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:28:44.009 13:27:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:28:44.009 "params": {
00:28:44.009 "name": "Nvme1",
00:28:44.009 "trtype": "tcp",
00:28:44.009 "traddr": "10.0.0.2",
00:28:44.009 "adrfam": "ipv4",
00:28:44.009 "trsvcid": "4420",
00:28:44.009 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:28:44.009 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:28:44.009 "hdgst": false,
00:28:44.009 "ddgst": false
00:28:44.009 },
00:28:44.009 "method": "bdev_nvme_attach_controller"
00:28:44.009 }'
00:28:44.009 [2024-11-25 13:27:41.460491] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization...
00:28:44.009 [2024-11-25 13:27:41.460579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283042 ]
00:28:44.009 [2024-11-25 13:27:41.530521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.009 [2024-11-25 13:27:41.588598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:44.266 Running I/O for 15 seconds...
00:28:46.574 8419.00 IOPS, 32.89 MiB/s
[2024-11-25T12:27:44.495Z] 8553.50 IOPS, 33.41 MiB/s
[2024-11-25T12:27:44.495Z] 13:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3282753
00:28:46.836 13:27:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:46.836 [2024-11-25 13:27:44.429529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.836 [2024-11-25 13:27:44.429576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.836 [nvme_qpair.c command/completion NOTICE pairs of the same shape repeat for every I/O outstanding when the target was killed: READ commands for lba:47520 through lba:47688 and WRITE commands for lba:47896 through lba:48504 (len:8 each, sqid:1), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
dnr:0 00:28:46.839 [2024-11-25 13:27:44.432563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.839 [2024-11-25 13:27:44.432577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.839 [2024-11-25 13:27:44.432626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:46.839 [2024-11-25 13:27:44.432654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.432985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.432998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.433010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.433023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.433036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 
[2024-11-25 13:27:44.433049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.433061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.433075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.433087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.433101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.433113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.839 [2024-11-25 13:27:44.433126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.839 [2024-11-25 13:27:44.433138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.840 [2024-11-25 13:27:44.433164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.840 [2024-11-25 13:27:44.433189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.840 [2024-11-25 13:27:44.433214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.840 [2024-11-25 13:27:44.433240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.840 [2024-11-25 13:27:44.433265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.840 [2024-11-25 13:27:44.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193fcf0 is same with the state(6) to be set 00:28:46.840 [2024-11-25 13:27:44.433368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.840 [2024-11-25 13:27:44.433379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.840 [2024-11-25 13:27:44.433391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47888 len:8 PRP1 0x0 PRP2 0x0 00:28:46.840 [2024-11-25 13:27:44.433404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.840 [2024-11-25 13:27:44.433546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.840 [2024-11-25 13:27:44.433574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.840 [2024-11-25 13:27:44.433600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.840 [2024-11-25 13:27:44.433634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.840 [2024-11-25 13:27:44.433663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:46.840 [2024-11-25 13:27:44.436809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.840 
[2024-11-25 13:27:44.436840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:46.840 [2024-11-25 13:27:44.437445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-11-25 13:27:44.437475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:46.840 [2024-11-25 13:27:44.437492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:46.840 [2024-11-25 13:27:44.437738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:46.840 [2024-11-25 13:27:44.437936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.840 [2024-11-25 13:27:44.437955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.840 [2024-11-25 13:27:44.437969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.840 [2024-11-25 13:27:44.437984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.840 [2024-11-25 13:27:44.450238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.840 [2024-11-25 13:27:44.450601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-11-25 13:27:44.450636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:46.840 [2024-11-25 13:27:44.450667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:46.840 [2024-11-25 13:27:44.450902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:46.840 [2024-11-25 13:27:44.451095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.840 [2024-11-25 13:27:44.451113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.840 [2024-11-25 13:27:44.451126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.840 [2024-11-25 13:27:44.451138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.840 [2024-11-25 13:27:44.463375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.840 [2024-11-25 13:27:44.463770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-11-25 13:27:44.463813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:46.840 [2024-11-25 13:27:44.463829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:46.840 [2024-11-25 13:27:44.464085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:46.840 [2024-11-25 13:27:44.464320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.840 [2024-11-25 13:27:44.464364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.840 [2024-11-25 13:27:44.464378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.840 [2024-11-25 13:27:44.464389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.840 [2024-11-25 13:27:44.476437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.840 [2024-11-25 13:27:44.476843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-11-25 13:27:44.476870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:46.840 [2024-11-25 13:27:44.476901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:46.840 [2024-11-25 13:27:44.477122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:46.840 [2024-11-25 13:27:44.477377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.840 [2024-11-25 13:27:44.477399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.840 [2024-11-25 13:27:44.477413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.840 [2024-11-25 13:27:44.477425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:46.840 [2024-11-25 13:27:44.490117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:46.840 [2024-11-25 13:27:44.490544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.840 [2024-11-25 13:27:44.490573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:46.840 [2024-11-25 13:27:44.490589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:46.840 [2024-11-25 13:27:44.490842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:46.840 [2024-11-25 13:27:44.491050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:46.840 [2024-11-25 13:27:44.491069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:46.840 [2024-11-25 13:27:44.491081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:46.840 [2024-11-25 13:27:44.491091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.101 [2024-11-25 13:27:44.503530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.101 [2024-11-25 13:27:44.503857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.101 [2024-11-25 13:27:44.503899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.101 [2024-11-25 13:27:44.503914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.101 [2024-11-25 13:27:44.504129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.101 [2024-11-25 13:27:44.504364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.101 [2024-11-25 13:27:44.504399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.101 [2024-11-25 13:27:44.504412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.101 [2024-11-25 13:27:44.504424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.101 [2024-11-25 13:27:44.516592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.101 [2024-11-25 13:27:44.516935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.101 [2024-11-25 13:27:44.516963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.101 [2024-11-25 13:27:44.516978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.101 [2024-11-25 13:27:44.517201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.101 [2024-11-25 13:27:44.517456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.101 [2024-11-25 13:27:44.517477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.101 [2024-11-25 13:27:44.517490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.101 [2024-11-25 13:27:44.517502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.101 [2024-11-25 13:27:44.529637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.101 [2024-11-25 13:27:44.530131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.101 [2024-11-25 13:27:44.530173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.101 [2024-11-25 13:27:44.530190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.101 [2024-11-25 13:27:44.530469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.101 [2024-11-25 13:27:44.530674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.101 [2024-11-25 13:27:44.530698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.101 [2024-11-25 13:27:44.530711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.101 [2024-11-25 13:27:44.530723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.101 [2024-11-25 13:27:44.542709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.101 [2024-11-25 13:27:44.543101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.101 [2024-11-25 13:27:44.543142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.101 [2024-11-25 13:27:44.543158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.101 [2024-11-25 13:27:44.543409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.101 [2024-11-25 13:27:44.543630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.101 [2024-11-25 13:27:44.543650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.101 [2024-11-25 13:27:44.543662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.101 [2024-11-25 13:27:44.543673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.101 [2024-11-25 13:27:44.555732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.101 [2024-11-25 13:27:44.556095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.101 [2024-11-25 13:27:44.556122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.101 [2024-11-25 13:27:44.556138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.101 [2024-11-25 13:27:44.556372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.101 [2024-11-25 13:27:44.556615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.102 [2024-11-25 13:27:44.556635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.102 [2024-11-25 13:27:44.556662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.102 [2024-11-25 13:27:44.556673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.102 [2024-11-25 13:27:44.568711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.102 [2024-11-25 13:27:44.569036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.102 [2024-11-25 13:27:44.569062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.102 [2024-11-25 13:27:44.569077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.102 [2024-11-25 13:27:44.569276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.102 [2024-11-25 13:27:44.569521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.102 [2024-11-25 13:27:44.569541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.102 [2024-11-25 13:27:44.569554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.102 [2024-11-25 13:27:44.569566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.102 [2024-11-25 13:27:44.581780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.102 [2024-11-25 13:27:44.582174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.102 [2024-11-25 13:27:44.582201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.102 [2024-11-25 13:27:44.582217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.102 [2024-11-25 13:27:44.582482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.102 [2024-11-25 13:27:44.582697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.102 [2024-11-25 13:27:44.582715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.102 [2024-11-25 13:27:44.582727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.102 [2024-11-25 13:27:44.582738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.102 [2024-11-25 13:27:44.594820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.102 [2024-11-25 13:27:44.595312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.102 [2024-11-25 13:27:44.595355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.102 [2024-11-25 13:27:44.595371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.102 [2024-11-25 13:27:44.595623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.102 [2024-11-25 13:27:44.595829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.102 [2024-11-25 13:27:44.595847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.102 [2024-11-25 13:27:44.595859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.102 [2024-11-25 13:27:44.595869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.102 [2024-11-25 13:27:44.607827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.102 [2024-11-25 13:27:44.608164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.102 [2024-11-25 13:27:44.608192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.102 [2024-11-25 13:27:44.608207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.102 [2024-11-25 13:27:44.608445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.102 [2024-11-25 13:27:44.608690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.102 [2024-11-25 13:27:44.608709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.102 [2024-11-25 13:27:44.608721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.102 [2024-11-25 13:27:44.608732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.102 [2024-11-25 13:27:44.620982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.102 [2024-11-25 13:27:44.621413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.102 [2024-11-25 13:27:44.621445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.102 [2024-11-25 13:27:44.621477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.102 [2024-11-25 13:27:44.621717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.102 [2024-11-25 13:27:44.621924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.102 [2024-11-25 13:27:44.621942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.102 [2024-11-25 13:27:44.621954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.102 [2024-11-25 13:27:44.621965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.102 [2024-11-25 13:27:44.634367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.102 [2024-11-25 13:27:44.634805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.102 [2024-11-25 13:27:44.634833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.102 [2024-11-25 13:27:44.634849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.102 [2024-11-25 13:27:44.635089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.102 [2024-11-25 13:27:44.635313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.102 [2024-11-25 13:27:44.635333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.102 [2024-11-25 13:27:44.635346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.102 [2024-11-25 13:27:44.635379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.102 [2024-11-25 13:27:44.647738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.102 [2024-11-25 13:27:44.648134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.102 [2024-11-25 13:27:44.648182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.102 [2024-11-25 13:27:44.648198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.102 [2024-11-25 13:27:44.648435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.102 [2024-11-25 13:27:44.648677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.648696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.648708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.648720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.661000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.661426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.661455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.661471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.661704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.661918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.661937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.661949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.661960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.674291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.674675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.674704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.674720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.674947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.675160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.675179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.675191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.675202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.687490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.687952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.688003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.688019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.688288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.688517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.688538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.688551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.688562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.700927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.701362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.701392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.701409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.701651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.701866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.701890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.701904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.701916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.714369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.714798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.714827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.714843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.715075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.715325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.715347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.715360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.715387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.727874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.728183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.728226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.728242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.728511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.728729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.728764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.728776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.728788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.741191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.741631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.741660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.741675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.741903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.742137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.742157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.742170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.742182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.103 [2024-11-25 13:27:44.754767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.103 [2024-11-25 13:27:44.755189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.103 [2024-11-25 13:27:44.755218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.103 [2024-11-25 13:27:44.755234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.103 [2024-11-25 13:27:44.755456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.103 [2024-11-25 13:27:44.755704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.103 [2024-11-25 13:27:44.755725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.103 [2024-11-25 13:27:44.755739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.103 [2024-11-25 13:27:44.755752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.363 [2024-11-25 13:27:44.768043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.363 [2024-11-25 13:27:44.768424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.363 [2024-11-25 13:27:44.768453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.363 [2024-11-25 13:27:44.768468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.363 [2024-11-25 13:27:44.768699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.363 [2024-11-25 13:27:44.768914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.363 [2024-11-25 13:27:44.768932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.363 [2024-11-25 13:27:44.768945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.363 [2024-11-25 13:27:44.768956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.363 [2024-11-25 13:27:44.781330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.363 [2024-11-25 13:27:44.781695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.363 [2024-11-25 13:27:44.781723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.363 [2024-11-25 13:27:44.781738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.363 [2024-11-25 13:27:44.781965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.363 [2024-11-25 13:27:44.782180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.363 [2024-11-25 13:27:44.782199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.363 [2024-11-25 13:27:44.782211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.363 [2024-11-25 13:27:44.782221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.363 [2024-11-25 13:27:44.794548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.363 [2024-11-25 13:27:44.794937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.363 [2024-11-25 13:27:44.794970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.363 [2024-11-25 13:27:44.794986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.363 [2024-11-25 13:27:44.795219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.363 [2024-11-25 13:27:44.795449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.363 [2024-11-25 13:27:44.795470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.363 [2024-11-25 13:27:44.795483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.363 [2024-11-25 13:27:44.795494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.363 [2024-11-25 13:27:44.807819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.363 [2024-11-25 13:27:44.808193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.363 [2024-11-25 13:27:44.808237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.363 [2024-11-25 13:27:44.808252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.363 [2024-11-25 13:27:44.808516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.363 [2024-11-25 13:27:44.808733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.363 [2024-11-25 13:27:44.808753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.363 [2024-11-25 13:27:44.808765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.363 [2024-11-25 13:27:44.808776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.363 [2024-11-25 13:27:44.821095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.363 [2024-11-25 13:27:44.821433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.363 [2024-11-25 13:27:44.821477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.363 [2024-11-25 13:27:44.821493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.363 [2024-11-25 13:27:44.821724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.363 [2024-11-25 13:27:44.821938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.363 [2024-11-25 13:27:44.821957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.363 [2024-11-25 13:27:44.821969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.363 [2024-11-25 13:27:44.821981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.363 7441.00 IOPS, 29.07 MiB/s [2024-11-25T12:27:45.022Z] [2024-11-25 13:27:44.835822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.363 [2024-11-25 13:27:44.836193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.363 [2024-11-25 13:27:44.836221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.364 [2024-11-25 13:27:44.836237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.364 [2024-11-25 13:27:44.836481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.364 [2024-11-25 13:27:44.836719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.364 [2024-11-25 13:27:44.836738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.364 [2024-11-25 13:27:44.836751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.364 [2024-11-25 13:27:44.836762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.364 [2024-11-25 13:27:44.849140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.364 [2024-11-25 13:27:44.849578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.364 [2024-11-25 13:27:44.849606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.364 [2024-11-25 13:27:44.849622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.364 [2024-11-25 13:27:44.849850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.364 [2024-11-25 13:27:44.850064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.364 [2024-11-25 13:27:44.850083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.364 [2024-11-25 13:27:44.850095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.364 [2024-11-25 13:27:44.850106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.364 [2024-11-25 13:27:44.862393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.364 [2024-11-25 13:27:44.862799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.364 [2024-11-25 13:27:44.862829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.364 [2024-11-25 13:27:44.862845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.364 [2024-11-25 13:27:44.863085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.364 [2024-11-25 13:27:44.863325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.364 [2024-11-25 13:27:44.863346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.364 [2024-11-25 13:27:44.863359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.364 [2024-11-25 13:27:44.863371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.364 [2024-11-25 13:27:44.875570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.364 [2024-11-25 13:27:44.875993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.364 [2024-11-25 13:27:44.876022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.364 [2024-11-25 13:27:44.876037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.364 [2024-11-25 13:27:44.876265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.364 [2024-11-25 13:27:44.876507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.364 [2024-11-25 13:27:44.876536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.364 [2024-11-25 13:27:44.876550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.364 [2024-11-25 13:27:44.876562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.364 [2024-11-25 13:27:44.888762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.364 [2024-11-25 13:27:44.889072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.364 [2024-11-25 13:27:44.889099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.364 [2024-11-25 13:27:44.889114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.364 [2024-11-25 13:27:44.889326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.364 [2024-11-25 13:27:44.889531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.364 [2024-11-25 13:27:44.889550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.364 [2024-11-25 13:27:44.889563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.364 [2024-11-25 13:27:44.889575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.364 [2024-11-25 13:27:44.901995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.364 [2024-11-25 13:27:44.902411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.364 [2024-11-25 13:27:44.902440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.364 [2024-11-25 13:27:44.902456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.364 [2024-11-25 13:27:44.902685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.364 [2024-11-25 13:27:44.902898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.364 [2024-11-25 13:27:44.902917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.364 [2024-11-25 13:27:44.902929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.364 [2024-11-25 13:27:44.902940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.364 [2024-11-25 13:27:44.915252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.364 [2024-11-25 13:27:44.915668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.364 [2024-11-25 13:27:44.915696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.364 [2024-11-25 13:27:44.915712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.364 [2024-11-25 13:27:44.915938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.364 [2024-11-25 13:27:44.916152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.364 [2024-11-25 13:27:44.916171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.364 [2024-11-25 13:27:44.916183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.364 [2024-11-25 13:27:44.916199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.364 [2024-11-25 13:27:44.928610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.364 [2024-11-25 13:27:44.929108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.364 [2024-11-25 13:27:44.929136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.364 [2024-11-25 13:27:44.929166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.364 [2024-11-25 13:27:44.929417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.364 [2024-11-25 13:27:44.929637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.364 [2024-11-25 13:27:44.929656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.364 [2024-11-25 13:27:44.929668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.364 [2024-11-25 13:27:44.929680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.364 [2024-11-25 13:27:44.941855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.364 [2024-11-25 13:27:44.942167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.364 [2024-11-25 13:27:44.942193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.364 [2024-11-25 13:27:44.942208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.364 [2024-11-25 13:27:44.942452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.364 [2024-11-25 13:27:44.942670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.364 [2024-11-25 13:27:44.942690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.364 [2024-11-25 13:27:44.942703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.364 [2024-11-25 13:27:44.942715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.364 [2024-11-25 13:27:44.955255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.364 [2024-11-25 13:27:44.955640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.364 [2024-11-25 13:27:44.955684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.364 [2024-11-25 13:27:44.955700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.364 [2024-11-25 13:27:44.955954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.364 [2024-11-25 13:27:44.956152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.364 [2024-11-25 13:27:44.956171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.364 [2024-11-25 13:27:44.956183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.364 [2024-11-25 13:27:44.956195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.364 [2024-11-25 13:27:44.968567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.364 [2024-11-25 13:27:44.968962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.365 [2024-11-25 13:27:44.969010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.365 [2024-11-25 13:27:44.969026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.365 [2024-11-25 13:27:44.969279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.365 [2024-11-25 13:27:44.969491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.365 [2024-11-25 13:27:44.969511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.365 [2024-11-25 13:27:44.969524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.365 [2024-11-25 13:27:44.969536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.365 [2024-11-25 13:27:44.981913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.365 [2024-11-25 13:27:44.982350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.365 [2024-11-25 13:27:44.982379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.365 [2024-11-25 13:27:44.982395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.365 [2024-11-25 13:27:44.982636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.365 [2024-11-25 13:27:44.982835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.365 [2024-11-25 13:27:44.982854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.365 [2024-11-25 13:27:44.982866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.365 [2024-11-25 13:27:44.982877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.365 [2024-11-25 13:27:44.995229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.365 [2024-11-25 13:27:44.995662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.365 [2024-11-25 13:27:44.995690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.365 [2024-11-25 13:27:44.995706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.365 [2024-11-25 13:27:44.995936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.365 [2024-11-25 13:27:44.996151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.365 [2024-11-25 13:27:44.996170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.365 [2024-11-25 13:27:44.996182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.365 [2024-11-25 13:27:44.996193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.365 [2024-11-25 13:27:45.008466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.365 [2024-11-25 13:27:45.008858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.365 [2024-11-25 13:27:45.008887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.365 [2024-11-25 13:27:45.008903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.365 [2024-11-25 13:27:45.009149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.365 [2024-11-25 13:27:45.009377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.365 [2024-11-25 13:27:45.009398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.365 [2024-11-25 13:27:45.009411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.365 [2024-11-25 13:27:45.009423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.625 [2024-11-25 13:27:45.021952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.625 [2024-11-25 13:27:45.022292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-11-25 13:27:45.022327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.625 [2024-11-25 13:27:45.022358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.625 [2024-11-25 13:27:45.022600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.625 [2024-11-25 13:27:45.022814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.625 [2024-11-25 13:27:45.022833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.625 [2024-11-25 13:27:45.022845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.625 [2024-11-25 13:27:45.022856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.625 [2024-11-25 13:27:45.035216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.625 [2024-11-25 13:27:45.035612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-11-25 13:27:45.035656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.625 [2024-11-25 13:27:45.035672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.625 [2024-11-25 13:27:45.035906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.625 [2024-11-25 13:27:45.036103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.625 [2024-11-25 13:27:45.036122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.625 [2024-11-25 13:27:45.036134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.625 [2024-11-25 13:27:45.036146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.625 [2024-11-25 13:27:45.048551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.625 [2024-11-25 13:27:45.048885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-11-25 13:27:45.048914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.625 [2024-11-25 13:27:45.048929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.625 [2024-11-25 13:27:45.049157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.625 [2024-11-25 13:27:45.049381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.625 [2024-11-25 13:27:45.049406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.625 [2024-11-25 13:27:45.049419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.625 [2024-11-25 13:27:45.049430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.625 [2024-11-25 13:27:45.061828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.625 [2024-11-25 13:27:45.062200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-11-25 13:27:45.062228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.625 [2024-11-25 13:27:45.062244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.625 [2024-11-25 13:27:45.062481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.625 [2024-11-25 13:27:45.062715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.625 [2024-11-25 13:27:45.062735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.625 [2024-11-25 13:27:45.062747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.625 [2024-11-25 13:27:45.062758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.625 [2024-11-25 13:27:45.075131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.625 [2024-11-25 13:27:45.075482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-11-25 13:27:45.075510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.625 [2024-11-25 13:27:45.075526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.625 [2024-11-25 13:27:45.075768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.625 [2024-11-25 13:27:45.075980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.625 [2024-11-25 13:27:45.076000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.076012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.076024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.088472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.088818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.088845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.088860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.089082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.626 [2024-11-25 13:27:45.089322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.626 [2024-11-25 13:27:45.089343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.089355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.089372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.101751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.102149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.102176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.102207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.102445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.626 [2024-11-25 13:27:45.102664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.626 [2024-11-25 13:27:45.102683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.102695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.102706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.114989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.115392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.115421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.115437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.115664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.626 [2024-11-25 13:27:45.115878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.626 [2024-11-25 13:27:45.115897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.115909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.115920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.128342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.128705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.128733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.128749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.128976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.626 [2024-11-25 13:27:45.129190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.626 [2024-11-25 13:27:45.129210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.129236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.129248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.141604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.141973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.142006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.142022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.142263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.626 [2024-11-25 13:27:45.142497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.626 [2024-11-25 13:27:45.142518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.142531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.142543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.154902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.155338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.155367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.155383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.155624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.626 [2024-11-25 13:27:45.155822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.626 [2024-11-25 13:27:45.155840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.155852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.155863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.168199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.168654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.168682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.168712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.168938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.626 [2024-11-25 13:27:45.169152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.626 [2024-11-25 13:27:45.169171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.626 [2024-11-25 13:27:45.169184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.626 [2024-11-25 13:27:45.169195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.626 [2024-11-25 13:27:45.181418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.626 [2024-11-25 13:27:45.181784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-11-25 13:27:45.181812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.626 [2024-11-25 13:27:45.181828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.626 [2024-11-25 13:27:45.182077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.627 [2024-11-25 13:27:45.182275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.627 [2024-11-25 13:27:45.182321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.627 [2024-11-25 13:27:45.182335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.627 [2024-11-25 13:27:45.182347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.627 [2024-11-25 13:27:45.194732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.627 [2024-11-25 13:27:45.195169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-11-25 13:27:45.195197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.627 [2024-11-25 13:27:45.195213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.627 [2024-11-25 13:27:45.195454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.627 [2024-11-25 13:27:45.195676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.627 [2024-11-25 13:27:45.195696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.627 [2024-11-25 13:27:45.195709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.627 [2024-11-25 13:27:45.195720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.627 [2024-11-25 13:27:45.207991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.627 [2024-11-25 13:27:45.208381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-11-25 13:27:45.208411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.627 [2024-11-25 13:27:45.208427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.627 [2024-11-25 13:27:45.208656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.627 [2024-11-25 13:27:45.208888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.627 [2024-11-25 13:27:45.208907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.627 [2024-11-25 13:27:45.208919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.627 [2024-11-25 13:27:45.208946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.627 [2024-11-25 13:27:45.221269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.627 [2024-11-25 13:27:45.221664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-11-25 13:27:45.221707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.627 [2024-11-25 13:27:45.221722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.627 [2024-11-25 13:27:45.221969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.627 [2024-11-25 13:27:45.222168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.627 [2024-11-25 13:27:45.222191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.627 [2024-11-25 13:27:45.222204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.627 [2024-11-25 13:27:45.222215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.627 [2024-11-25 13:27:45.234576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.627 [2024-11-25 13:27:45.234981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-11-25 13:27:45.235009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.627 [2024-11-25 13:27:45.235024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.627 [2024-11-25 13:27:45.235245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.627 [2024-11-25 13:27:45.235488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.627 [2024-11-25 13:27:45.235509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.627 [2024-11-25 13:27:45.235522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.627 [2024-11-25 13:27:45.235533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.627 [2024-11-25 13:27:45.247891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.627 [2024-11-25 13:27:45.248264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-11-25 13:27:45.248291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:47.627 [2024-11-25 13:27:45.248317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:47.627 [2024-11-25 13:27:45.248560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:47.627 [2024-11-25 13:27:45.248773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.627 [2024-11-25 13:27:45.248792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.627 [2024-11-25 13:27:45.248804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.627 [2024-11-25 13:27:45.248815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.627 [2024-11-25 13:27:45.261195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.627 [2024-11-25 13:27:45.261629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.627 [2024-11-25 13:27:45.261657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.627 [2024-11-25 13:27:45.261673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.627 [2024-11-25 13:27:45.261903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.627 [2024-11-25 13:27:45.262117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.627 [2024-11-25 13:27:45.262136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.627 [2024-11-25 13:27:45.262149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.627 [2024-11-25 13:27:45.262165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.627 [2024-11-25 13:27:45.274546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.627 [2024-11-25 13:27:45.275003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.627 [2024-11-25 13:27:45.275032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.627 [2024-11-25 13:27:45.275048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.627 [2024-11-25 13:27:45.275289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.627 [2024-11-25 13:27:45.275518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.627 [2024-11-25 13:27:45.275539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.627 [2024-11-25 13:27:45.275551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.627 [2024-11-25 13:27:45.275563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.887 [2024-11-25 13:27:45.287909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.887 [2024-11-25 13:27:45.288283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.887 [2024-11-25 13:27:45.288318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.887 [2024-11-25 13:27:45.288336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.887 [2024-11-25 13:27:45.288549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.887 [2024-11-25 13:27:45.288802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.887 [2024-11-25 13:27:45.288821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.288833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.288844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.301237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.301635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.301678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.301694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.301947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.302145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.302164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.302175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.302186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.314546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.314935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.314967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.314983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.315216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.315445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.315465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.315478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.315490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.327881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.328318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.328347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.328362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.328590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.328806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.328825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.328837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.328848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.341214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.341612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.341655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.341671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.341924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.342137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.342155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.342167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.342178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.354525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.354916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.354960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.354975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.355247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.355476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.355497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.355509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.355521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.367837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.368207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.368235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.368251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.368494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.368735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.368754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.368766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.368778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.381117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.381545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.381574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.381589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.381818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.382034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.382053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.382066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.382077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.394455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.394844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.394872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.394887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.395121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.395345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.395370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.395383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.395395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.407779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.408129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.408157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.408173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.408411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.408645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.888 [2024-11-25 13:27:45.408664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.888 [2024-11-25 13:27:45.408676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.888 [2024-11-25 13:27:45.408688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.888 [2024-11-25 13:27:45.421109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.888 [2024-11-25 13:27:45.421460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.888 [2024-11-25 13:27:45.421488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.888 [2024-11-25 13:27:45.421504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.888 [2024-11-25 13:27:45.421731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.888 [2024-11-25 13:27:45.421944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.421963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.421975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.421986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.434412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.434869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.434898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.434914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.435155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.435398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.435419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.435432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.435448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.447813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.448216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.448245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.448261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.448512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.448730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.448750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.448763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.448775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.461296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.461827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.461857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.461874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.462115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.462338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.462374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.462387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.462400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.474477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.474905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.474933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.474949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.475178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.475421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.475442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.475455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.475466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.487739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.488143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.488177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.488194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.488416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.488673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.488692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.488705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.488716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.500998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.501374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.501417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.501433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.501685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.501883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.501903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.501915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.501926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.514220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.514661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.514690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.514706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.514934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.515147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.515166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.515179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.515190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.527482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.527837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.527864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.527879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.528101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.528340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.528361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.528373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.528385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.889 [2024-11-25 13:27:45.540850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.889 [2024-11-25 13:27:45.541192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.889 [2024-11-25 13:27:45.541220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:47.889 [2024-11-25 13:27:45.541236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:47.889 [2024-11-25 13:27:45.541490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:47.889 [2024-11-25 13:27:45.541719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.889 [2024-11-25 13:27:45.541737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.889 [2024-11-25 13:27:45.541750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.889 [2024-11-25 13:27:45.541760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.150 [2024-11-25 13:27:45.554118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.150 [2024-11-25 13:27:45.554561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.150 [2024-11-25 13:27:45.554592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.150 [2024-11-25 13:27:45.554607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.150 [2024-11-25 13:27:45.554849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.150 [2024-11-25 13:27:45.555042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.150 [2024-11-25 13:27:45.555060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.150 [2024-11-25 13:27:45.555072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.150 [2024-11-25 13:27:45.555082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.150 [2024-11-25 13:27:45.567223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.150 [2024-11-25 13:27:45.567660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.150 [2024-11-25 13:27:45.567703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.150 [2024-11-25 13:27:45.567719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.150 [2024-11-25 13:27:45.567960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.150 [2024-11-25 13:27:45.568152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.150 [2024-11-25 13:27:45.568176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.150 [2024-11-25 13:27:45.568188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.150 [2024-11-25 13:27:45.568199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.150 [2024-11-25 13:27:45.580344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.150 [2024-11-25 13:27:45.580777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.150 [2024-11-25 13:27:45.580819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.150 [2024-11-25 13:27:45.580834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.150 [2024-11-25 13:27:45.581085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.150 [2024-11-25 13:27:45.581318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.150 [2024-11-25 13:27:45.581337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.150 [2024-11-25 13:27:45.581349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.150 [2024-11-25 13:27:45.581361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.150 [2024-11-25 13:27:45.593488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.150 [2024-11-25 13:27:45.593806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.150 [2024-11-25 13:27:45.593846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.150 [2024-11-25 13:27:45.593861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.150 [2024-11-25 13:27:45.594076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.150 [2024-11-25 13:27:45.594330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.150 [2024-11-25 13:27:45.594351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.150 [2024-11-25 13:27:45.594363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.150 [2024-11-25 13:27:45.594375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.150 [2024-11-25 13:27:45.606628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.150 [2024-11-25 13:27:45.607127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.150 [2024-11-25 13:27:45.607169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.150 [2024-11-25 13:27:45.607185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.150 [2024-11-25 13:27:45.607466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.150 [2024-11-25 13:27:45.607698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.150 [2024-11-25 13:27:45.607716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.150 [2024-11-25 13:27:45.607728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.150 [2024-11-25 13:27:45.607739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.150 [2024-11-25 13:27:45.619796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.150 [2024-11-25 13:27:45.620175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.150 [2024-11-25 13:27:45.620218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.150 [2024-11-25 13:27:45.620233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.150 [2024-11-25 13:27:45.620483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.150 [2024-11-25 13:27:45.620738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.150 [2024-11-25 13:27:45.620758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.150 [2024-11-25 13:27:45.620770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.150 [2024-11-25 13:27:45.620781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.150 [2024-11-25 13:27:45.632906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.150 [2024-11-25 13:27:45.633399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.150 [2024-11-25 13:27:45.633427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.150 [2024-11-25 13:27:45.633457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.150 [2024-11-25 13:27:45.633709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.633900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.633919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.633931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.633941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.646094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.646421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.646462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.646478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.646701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.646909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.646928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.646939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.646950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.659308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.659647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.659679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.659694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.659894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.660102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.660121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.660133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.660143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.672329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.672822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.672864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.672880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.673129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.673363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.673383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.673395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.673406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.685552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.685939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.685967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.685983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.686221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.686458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.686479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.686491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.686502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.698570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.698993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.699036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.699052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.699332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.699545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.699565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.699578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.699589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.711781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.712099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.712188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.712204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.712472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.712684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.712703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.712716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.712727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.725045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.725472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.725516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.725531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.725784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.725992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.726010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.726021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.726032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.738149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.738574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.738620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.738636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.738885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.739077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.739100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.739112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.739123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.751400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.751784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.751826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.751841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.752087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.151 [2024-11-25 13:27:45.752318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.151 [2024-11-25 13:27:45.752339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.151 [2024-11-25 13:27:45.752351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.151 [2024-11-25 13:27:45.752363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.151 [2024-11-25 13:27:45.764421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.151 [2024-11-25 13:27:45.764781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.151 [2024-11-25 13:27:45.764821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.151 [2024-11-25 13:27:45.764836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.151 [2024-11-25 13:27:45.765076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.152 [2024-11-25 13:27:45.765268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.152 [2024-11-25 13:27:45.765286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.152 [2024-11-25 13:27:45.765298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.152 [2024-11-25 13:27:45.765335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.152 [2024-11-25 13:27:45.777642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.152 [2024-11-25 13:27:45.777958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.152 [2024-11-25 13:27:45.777984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.152 [2024-11-25 13:27:45.777999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.152 [2024-11-25 13:27:45.778217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.152 [2024-11-25 13:27:45.778456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.152 [2024-11-25 13:27:45.778476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.152 [2024-11-25 13:27:45.778488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.152 [2024-11-25 13:27:45.778499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.152 [2024-11-25 13:27:45.790737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.152 [2024-11-25 13:27:45.791127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.152 [2024-11-25 13:27:45.791168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.152 [2024-11-25 13:27:45.791183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.152 [2024-11-25 13:27:45.791436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.152 [2024-11-25 13:27:45.791664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.152 [2024-11-25 13:27:45.791683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.152 [2024-11-25 13:27:45.791695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.152 [2024-11-25 13:27:45.791705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.152 [2024-11-25 13:27:45.804051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.152 [2024-11-25 13:27:45.804480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.152 [2024-11-25 13:27:45.804509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.152 [2024-11-25 13:27:45.804524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.152 [2024-11-25 13:27:45.804743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.152 [2024-11-25 13:27:45.804981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.152 [2024-11-25 13:27:45.805002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.152 [2024-11-25 13:27:45.805015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.152 [2024-11-25 13:27:45.805043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.411 [2024-11-25 13:27:45.817159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.411 [2024-11-25 13:27:45.817580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.411 [2024-11-25 13:27:45.817607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.411 [2024-11-25 13:27:45.817622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.411 [2024-11-25 13:27:45.817837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.411 [2024-11-25 13:27:45.818045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.411 [2024-11-25 13:27:45.818063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.411 [2024-11-25 13:27:45.818075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.411 [2024-11-25 13:27:45.818086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.830313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.830695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.830729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.830745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.830984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.831192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.831210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.831222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.831233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 5580.75 IOPS, 21.80 MiB/s [2024-11-25T12:27:46.071Z] [2024-11-25 13:27:45.843458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.843808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.843836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.843852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.844077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.844285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.844328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.844343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.844354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.856727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.857219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.857260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.857276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.857534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.857744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.857762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.857774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.857784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.869794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.870094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.870120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.870135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.870359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.870573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.870593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.870605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.870631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.882936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.883385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.883413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.883429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.883663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.883870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.883889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.883901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.883912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.896028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.896462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.896491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.896507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.896747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.896954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.896973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.896984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.896996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.909122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.909530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.909573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.909589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.909816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.910024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.910047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.910059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.910070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.922385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.922833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.922860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.922891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.923129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.923365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.923385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.923397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.923408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.935406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.935771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.935799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.935830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.936080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.936288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.936316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.412 [2024-11-25 13:27:45.936330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.412 [2024-11-25 13:27:45.936341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.412 [2024-11-25 13:27:45.948390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.412 [2024-11-25 13:27:45.948881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.412 [2024-11-25 13:27:45.948908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.412 [2024-11-25 13:27:45.948939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.412 [2024-11-25 13:27:45.949187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.412 [2024-11-25 13:27:45.949407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.412 [2024-11-25 13:27:45.949426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:45.949438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:45.949454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:45.961594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:45.961991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:45.962018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:45.962034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:45.962249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:45.962486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:45.962520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:45.962534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:45.962546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:45.974874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:45.975252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:45.975278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:45.975292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:45.975538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:45.975763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:45.975782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:45.975793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:45.975804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:45.987961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:45.988591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:45.988648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:45.988667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:45.988899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:45.989093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:45.989112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:45.989123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:45.989134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:46.001002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:46.001382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:46.001416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:46.001432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:46.001667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:46.001860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:46.001879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:46.001891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:46.001902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:46.014236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:46.014760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:46.014804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:46.014821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:46.015067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:46.015259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:46.015278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:46.015315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:46.015330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:46.027251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:46.027640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:46.027669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:46.027684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:46.027917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:46.028110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:46.028128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:46.028140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:46.028151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:46.040405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:46.040820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:46.040848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:46.040864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:46.041090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:46.041325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:46.041345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:46.041357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:46.041368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:46.053538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:46.053953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:46.053995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:46.054011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:46.054234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:46.054472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:46.054493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:46.054505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:46.054517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.413 [2024-11-25 13:27:46.067130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.413 [2024-11-25 13:27:46.067569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.413 [2024-11-25 13:27:46.067636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.413 [2024-11-25 13:27:46.067652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.413 [2024-11-25 13:27:46.067905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.413 [2024-11-25 13:27:46.068150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.413 [2024-11-25 13:27:46.068185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.413 [2024-11-25 13:27:46.068199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.413 [2024-11-25 13:27:46.068211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.674 [2024-11-25 13:27:46.080293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.674 [2024-11-25 13:27:46.080664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.674 [2024-11-25 13:27:46.080692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.674 [2024-11-25 13:27:46.080721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.674 [2024-11-25 13:27:46.080970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.674 [2024-11-25 13:27:46.081177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.674 [2024-11-25 13:27:46.081200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.674 [2024-11-25 13:27:46.081213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.674 [2024-11-25 13:27:46.081224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.674 [2024-11-25 13:27:46.093335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.674 [2024-11-25 13:27:46.093696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.674 [2024-11-25 13:27:46.093737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.674 [2024-11-25 13:27:46.093752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.674 [2024-11-25 13:27:46.093997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.674 [2024-11-25 13:27:46.094188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.674 [2024-11-25 13:27:46.094207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.674 [2024-11-25 13:27:46.094219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.674 [2024-11-25 13:27:46.094230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.674 [2024-11-25 13:27:46.106453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.674 [2024-11-25 13:27:46.106961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.674 [2024-11-25 13:27:46.106989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.674 [2024-11-25 13:27:46.107021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.674 [2024-11-25 13:27:46.107270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.674 [2024-11-25 13:27:46.107486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.674 [2024-11-25 13:27:46.107506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.674 [2024-11-25 13:27:46.107517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.674 [2024-11-25 13:27:46.107528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.674 [2024-11-25 13:27:46.119627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.674 [2024-11-25 13:27:46.120115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.674 [2024-11-25 13:27:46.120170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.674 [2024-11-25 13:27:46.120185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.674 [2024-11-25 13:27:46.120443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.674 [2024-11-25 13:27:46.120656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.674 [2024-11-25 13:27:46.120675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.674 [2024-11-25 13:27:46.120687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.674 [2024-11-25 13:27:46.120703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.674 [2024-11-25 13:27:46.132734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.674 [2024-11-25 13:27:46.133099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.674 [2024-11-25 13:27:46.133142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.674 [2024-11-25 13:27:46.133158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.674 [2024-11-25 13:27:46.133422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.674 [2024-11-25 13:27:46.133655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.674 [2024-11-25 13:27:46.133674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.674 [2024-11-25 13:27:46.133686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.674 [2024-11-25 13:27:46.133697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.674 [2024-11-25 13:27:46.145874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.674 [2024-11-25 13:27:46.146285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.674 [2024-11-25 13:27:46.146344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.674 [2024-11-25 13:27:46.146360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.674 [2024-11-25 13:27:46.146627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.674 [2024-11-25 13:27:46.146837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.674 [2024-11-25 13:27:46.146855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.674 [2024-11-25 13:27:46.146867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.674 [2024-11-25 13:27:46.146878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.674 [2024-11-25 13:27:46.158977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.674 [2024-11-25 13:27:46.159353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.674 [2024-11-25 13:27:46.159396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.674 [2024-11-25 13:27:46.159412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.674 [2024-11-25 13:27:46.159654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.674 [2024-11-25 13:27:46.159862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.674 [2024-11-25 13:27:46.159881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.674 [2024-11-25 13:27:46.159892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.674 [2024-11-25 13:27:46.159903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.674 [2024-11-25 13:27:46.172217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.674 [2024-11-25 13:27:46.172658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.674 [2024-11-25 13:27:46.172718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.674 [2024-11-25 13:27:46.172734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.674 [2024-11-25 13:27:46.172978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.674 [2024-11-25 13:27:46.173170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.674 [2024-11-25 13:27:46.173188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.674 [2024-11-25 13:27:46.173200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.674 [2024-11-25 13:27:46.173210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.674 [2024-11-25 13:27:46.185379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.674 [2024-11-25 13:27:46.185766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.674 [2024-11-25 13:27:46.185808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.674 [2024-11-25 13:27:46.185824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.674 [2024-11-25 13:27:46.186077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.674 [2024-11-25 13:27:46.186298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.674 [2024-11-25 13:27:46.186329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.674 [2024-11-25 13:27:46.186341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.674 [2024-11-25 13:27:46.186367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.674 [2024-11-25 13:27:46.198571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.674 [2024-11-25 13:27:46.199030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.674 [2024-11-25 13:27:46.199081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.674 [2024-11-25 13:27:46.199096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.674 [2024-11-25 13:27:46.199365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.199559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.199579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.199592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.199603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.211875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.212239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.212266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.212281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.212551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.212778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.212797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.212809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.212820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.225016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.225398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.225441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.225456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.225718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.225910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.225928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.225940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.225951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.238135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.238572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.238629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.238645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.238896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.239107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.239125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.239137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.239148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.251348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.251771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.251822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.251837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.252098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.252315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.252340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.252368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.252379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.264510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.264944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.264995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.265010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.265268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.265488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.265509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.265521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.265532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.277585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.277959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.277987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.278002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.278237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.278478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.278498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.278511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.278521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.290669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.290982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.291063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.291078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.291322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.291520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.291539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.291551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.291570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.303632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.304025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.304052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.304067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.304288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.304512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.304531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.304543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.304554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.316761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.317130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.317174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.675 [2024-11-25 13:27:46.317190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.675 [2024-11-25 13:27:46.317468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.675 [2024-11-25 13:27:46.317682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.675 [2024-11-25 13:27:46.317701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.675 [2024-11-25 13:27:46.317713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.675 [2024-11-25 13:27:46.317724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.675 [2024-11-25 13:27:46.330382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.675 [2024-11-25 13:27:46.330720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.675 [2024-11-25 13:27:46.330748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.934 [2024-11-25 13:27:46.330764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.934 [2024-11-25 13:27:46.330991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.934 [2024-11-25 13:27:46.331212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.934 [2024-11-25 13:27:46.331231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.934 [2024-11-25 13:27:46.331244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.934 [2024-11-25 13:27:46.331255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.934 [2024-11-25 13:27:46.343596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.934 [2024-11-25 13:27:46.344006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.934 [2024-11-25 13:27:46.344039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.934 [2024-11-25 13:27:46.344056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.934 [2024-11-25 13:27:46.344281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.934 [2024-11-25 13:27:46.344520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.934 [2024-11-25 13:27:46.344540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.934 [2024-11-25 13:27:46.344552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.934 [2024-11-25 13:27:46.344563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.934 [2024-11-25 13:27:46.356640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.934 [2024-11-25 13:27:46.357129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.934 [2024-11-25 13:27:46.357155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.934 [2024-11-25 13:27:46.357185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.934 [2024-11-25 13:27:46.357426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.934 [2024-11-25 13:27:46.357625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.934 [2024-11-25 13:27:46.357657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.934 [2024-11-25 13:27:46.357668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.934 [2024-11-25 13:27:46.357679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.934 [2024-11-25 13:27:46.369800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.934 [2024-11-25 13:27:46.370193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.934 [2024-11-25 13:27:46.370221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.934 [2024-11-25 13:27:46.370236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.934 [2024-11-25 13:27:46.370468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.934 [2024-11-25 13:27:46.370701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.934 [2024-11-25 13:27:46.370719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.934 [2024-11-25 13:27:46.370731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.934 [2024-11-25 13:27:46.370742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.934 [2024-11-25 13:27:46.383021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.934 [2024-11-25 13:27:46.383402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.383430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.383445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.383690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.383898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.383917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.383929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.383939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.396110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.396478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.396506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.396521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.396755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.396962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.396981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.396993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.397003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.409345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.409686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.409742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.409786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.410014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.410207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.410225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.410237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.410248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.422354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.422790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.422832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.422849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.423089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.423291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.423324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.423338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.423349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.435363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.435723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.435751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.435766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.435984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.436191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.436209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.436221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.436232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.448445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.448825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.448866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.448880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.449121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.449340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.449360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.449372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.449384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.461691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.462185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.462227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.462244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.462492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.462736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.462756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.462769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.462786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.475095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.475546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.475575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.475591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.475831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.476038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.476057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.476068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.476079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.488078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.488509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.488551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.488567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.488807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.489015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.489033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.489044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.489055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.501173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.501668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.935 [2024-11-25 13:27:46.501710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.935 [2024-11-25 13:27:46.501726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.935 [2024-11-25 13:27:46.501976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.935 [2024-11-25 13:27:46.502182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.935 [2024-11-25 13:27:46.502201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.935 [2024-11-25 13:27:46.502212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.935 [2024-11-25 13:27:46.502223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.935 [2024-11-25 13:27:46.514395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.935 [2024-11-25 13:27:46.514790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.936 [2024-11-25 13:27:46.514821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420
00:28:48.936 [2024-11-25 13:27:46.514836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set
00:28:48.936 [2024-11-25 13:27:46.515050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor
00:28:48.936 [2024-11-25 13:27:46.515258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.936 [2024-11-25 13:27:46.515276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.936 [2024-11-25 13:27:46.515312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.936 [2024-11-25 13:27:46.515327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.936 [2024-11-25 13:27:46.527625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.936 [2024-11-25 13:27:46.528013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.936 [2024-11-25 13:27:46.528041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.936 [2024-11-25 13:27:46.528056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.936 [2024-11-25 13:27:46.528276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.936 [2024-11-25 13:27:46.528535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.936 [2024-11-25 13:27:46.528557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.936 [2024-11-25 13:27:46.528571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.936 [2024-11-25 13:27:46.528584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.936 [2024-11-25 13:27:46.540961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.936 [2024-11-25 13:27:46.541352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.936 [2024-11-25 13:27:46.541381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.936 [2024-11-25 13:27:46.541396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.936 [2024-11-25 13:27:46.541625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.936 [2024-11-25 13:27:46.541840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.936 [2024-11-25 13:27:46.541859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.936 [2024-11-25 13:27:46.541871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.936 [2024-11-25 13:27:46.541883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.936 [2024-11-25 13:27:46.554397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.936 [2024-11-25 13:27:46.554886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.936 [2024-11-25 13:27:46.554914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.936 [2024-11-25 13:27:46.554944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.936 [2024-11-25 13:27:46.555200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.936 [2024-11-25 13:27:46.555446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.936 [2024-11-25 13:27:46.555467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.936 [2024-11-25 13:27:46.555480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.936 [2024-11-25 13:27:46.555492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.936 [2024-11-25 13:27:46.567742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.936 [2024-11-25 13:27:46.568176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.936 [2024-11-25 13:27:46.568204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.936 [2024-11-25 13:27:46.568220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.936 [2024-11-25 13:27:46.568461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.936 [2024-11-25 13:27:46.568703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.936 [2024-11-25 13:27:46.568722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.936 [2024-11-25 13:27:46.568734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.936 [2024-11-25 13:27:46.568745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.936 [2024-11-25 13:27:46.581078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.936 [2024-11-25 13:27:46.581467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.936 [2024-11-25 13:27:46.581495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:48.936 [2024-11-25 13:27:46.581511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:48.936 [2024-11-25 13:27:46.581750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:48.936 [2024-11-25 13:27:46.581963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.936 [2024-11-25 13:27:46.581982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.936 [2024-11-25 13:27:46.581994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.936 [2024-11-25 13:27:46.582006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.195 [2024-11-25 13:27:46.594492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.195 [2024-11-25 13:27:46.594900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.195 [2024-11-25 13:27:46.594928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.195 [2024-11-25 13:27:46.594959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.195 [2024-11-25 13:27:46.595187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.195 [2024-11-25 13:27:46.595455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.195 [2024-11-25 13:27:46.595482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.195 [2024-11-25 13:27:46.595496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.195 [2024-11-25 13:27:46.595508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.195 [2024-11-25 13:27:46.607813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.195 [2024-11-25 13:27:46.608182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.195 [2024-11-25 13:27:46.608225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.195 [2024-11-25 13:27:46.608241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.195 [2024-11-25 13:27:46.608523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.195 [2024-11-25 13:27:46.608741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.195 [2024-11-25 13:27:46.608760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.195 [2024-11-25 13:27:46.608772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.195 [2024-11-25 13:27:46.608783] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.195 [2024-11-25 13:27:46.621125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.195 [2024-11-25 13:27:46.621525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.621554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.621570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.621810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.622024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.622044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.622056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.622067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.634360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.637466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.637507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.637525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.637772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.637972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.637991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.638004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.638021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.647680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.648027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.648056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.648072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.648301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.648532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.648553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.648566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.648577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.661034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.661345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.661391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.661409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.661637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.661853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.661873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.661885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.661897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.674266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.674648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.674677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.674692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.674920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.675134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.675153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.675165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.675176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.687522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.687921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.687964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.687980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.688220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.688464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.688485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.688497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.688509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.700764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.701106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.701135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.701151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.701401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.701622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.701642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.701655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.701667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.714140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.714482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.714511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.714526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.714759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.714957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.714976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.714989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.715001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.727406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.727776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.727805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.727820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.728054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.728269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.728312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.728328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.728339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.740711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.196 [2024-11-25 13:27:46.741084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.196 [2024-11-25 13:27:46.741112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.196 [2024-11-25 13:27:46.741128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.196 [2024-11-25 13:27:46.741392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.196 [2024-11-25 13:27:46.741600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.196 [2024-11-25 13:27:46.741619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.196 [2024-11-25 13:27:46.741631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.196 [2024-11-25 13:27:46.741658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.196 [2024-11-25 13:27:46.753988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.754362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.754391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.754407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.754633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.754847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.754867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.754879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.754890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.197 [2024-11-25 13:27:46.767252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.767661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.767690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.767705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.767933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.768146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.768181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.768194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.768205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.197 [2024-11-25 13:27:46.780588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.781006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.781034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.781049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.781277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.781519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.781539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.781552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.781564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.197 [2024-11-25 13:27:46.793896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.794333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.794362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.794378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.794606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.794821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.794840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.794852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.794863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.197 [2024-11-25 13:27:46.807217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.807678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.807722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.807738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.807977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.808192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.808210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.808222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.808243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.197 [2024-11-25 13:27:46.820591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.821027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.821055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.821071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.821323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.821529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.821548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.821561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.821572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.197 [2024-11-25 13:27:46.833832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.834223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.834251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.834267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.834518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.834733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.834752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.834765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.834777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.197 4464.60 IOPS, 17.44 MiB/s [2024-11-25T12:27:46.856Z] [2024-11-25 13:27:46.847031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.197 [2024-11-25 13:27:46.847439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.197 [2024-11-25 13:27:46.847467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.197 [2024-11-25 13:27:46.847483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.197 [2024-11-25 13:27:46.847725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.197 [2024-11-25 13:27:46.847923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.197 [2024-11-25 13:27:46.847942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.197 [2024-11-25 13:27:46.847953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.197 [2024-11-25 13:27:46.847965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.456 [2024-11-25 13:27:46.860543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.456 [2024-11-25 13:27:46.860941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.456 [2024-11-25 13:27:46.860970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.860985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.861226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.861462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.861483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.861496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.861508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.873872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.874211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.874240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.874255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.874493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.874725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.874744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.874756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.874767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.887110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.887503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.887531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.887547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.887774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.887995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.888015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.888028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.888039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.900411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.900804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.900832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.900848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.901093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.901316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.901336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.901349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.901361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.913683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.914117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.914145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.914160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.914399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.914635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.914654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.914666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.914678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.926994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.927385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.927414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.927430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.927659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.927892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.927912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.927924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.927936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.940197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.940605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.940634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.940649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.940890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.941104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.941128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.941141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.941152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.953482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.953811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.953853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.953869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.954091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.954333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.954354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.954367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.954379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.966786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.967161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.967189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.967205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.967442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.967679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.967698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.967711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.967723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.457 [2024-11-25 13:27:46.980234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.457 [2024-11-25 13:27:46.980610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.457 [2024-11-25 13:27:46.980638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.457 [2024-11-25 13:27:46.980654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.457 [2024-11-25 13:27:46.980882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.457 [2024-11-25 13:27:46.981096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.457 [2024-11-25 13:27:46.981114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.457 [2024-11-25 13:27:46.981126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.457 [2024-11-25 13:27:46.981142] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:46.993491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:46.993898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:46.993925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:46.993955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:46.994183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:46.994408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:46.994427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:46.994439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:46.994451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.006850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.007288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.007324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.007341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.007572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.007788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.007807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.007819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.007830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.020137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.020526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.020555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.020570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.020811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.021009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.021028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.021040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.021052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.033364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.033750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.033777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.033793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.034014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.034227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.034246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.034258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.034269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.046691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.047012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.047053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.047068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.047289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.047502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.047522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.047534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.047545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.059935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.060372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.060402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.060417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.060645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.060860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.060879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.060891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.060903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.073246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.073686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.073715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.073730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.073966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.074180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.074199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.074211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.074222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.086465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.086916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.086944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.086959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.087200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.087428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.087449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.087462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.087474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.099822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.458 [2024-11-25 13:27:47.100145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.458 [2024-11-25 13:27:47.100172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.458 [2024-11-25 13:27:47.100188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.458 [2024-11-25 13:27:47.100427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.458 [2024-11-25 13:27:47.100673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.458 [2024-11-25 13:27:47.100692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.458 [2024-11-25 13:27:47.100704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.458 [2024-11-25 13:27:47.100716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.458 [2024-11-25 13:27:47.113496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.718 [2024-11-25 13:27:47.113892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.718 [2024-11-25 13:27:47.113921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.718 [2024-11-25 13:27:47.113936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.718 [2024-11-25 13:27:47.114164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.718 [2024-11-25 13:27:47.114405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.718 [2024-11-25 13:27:47.114433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.718 [2024-11-25 13:27:47.114447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.718 [2024-11-25 13:27:47.114460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.718 [2024-11-25 13:27:47.126767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.718 [2024-11-25 13:27:47.127201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.718 [2024-11-25 13:27:47.127230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.718 [2024-11-25 13:27:47.127245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.718 [2024-11-25 13:27:47.127483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.718 [2024-11-25 13:27:47.127701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.718 [2024-11-25 13:27:47.127720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.718 [2024-11-25 13:27:47.127732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.718 [2024-11-25 13:27:47.127743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.718 [2024-11-25 13:27:47.139994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.718 [2024-11-25 13:27:47.140422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.718 [2024-11-25 13:27:47.140450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.718 [2024-11-25 13:27:47.140466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.718 [2024-11-25 13:27:47.140705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.718 [2024-11-25 13:27:47.140904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.718 [2024-11-25 13:27:47.140923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.718 [2024-11-25 13:27:47.140935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.718 [2024-11-25 13:27:47.140946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.718 [2024-11-25 13:27:47.153258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.718 [2024-11-25 13:27:47.153656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.718 [2024-11-25 13:27:47.153684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.718 [2024-11-25 13:27:47.153700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.718 [2024-11-25 13:27:47.153941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.718 [2024-11-25 13:27:47.154153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.718 [2024-11-25 13:27:47.154172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.718 [2024-11-25 13:27:47.154185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.718 [2024-11-25 13:27:47.154201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.718 [2024-11-25 13:27:47.166432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.718 [2024-11-25 13:27:47.166882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.718 [2024-11-25 13:27:47.166911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.718 [2024-11-25 13:27:47.166927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.718 [2024-11-25 13:27:47.167167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.718 [2024-11-25 13:27:47.167395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.718 [2024-11-25 13:27:47.167416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.718 [2024-11-25 13:27:47.167428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.718 [2024-11-25 13:27:47.167440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.718 [2024-11-25 13:27:47.179779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.718 [2024-11-25 13:27:47.180120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.718 [2024-11-25 13:27:47.180147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.718 [2024-11-25 13:27:47.180162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.180427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.180646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.180666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.180677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.180689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.192997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.193382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.193412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.193427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.193655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.193870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.193889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.193901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.193912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.206257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.206597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.206631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.206648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.206886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.207101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.207120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.207133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.207144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.219521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.219865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.219893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.219908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.220114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.220388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.220409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.220422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.220435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.232794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.233164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.233192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.233207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.233446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.233678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.233697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.233709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.233720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.246086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.246531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.246561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.246576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.246810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.247024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.247043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.247055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.247066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.259301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.259756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.259785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.259800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.260041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.260239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.260258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.260270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.260281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.272644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.273016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.273059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.273074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.273335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.273547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.273588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.273601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.273612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.285990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.286368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.286396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.286412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.286640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.286855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.286879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.286891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.286903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.299238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.299633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.299676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.299692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.719 [2024-11-25 13:27:47.299945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.719 [2024-11-25 13:27:47.300158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.719 [2024-11-25 13:27:47.300177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.719 [2024-11-25 13:27:47.300189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.719 [2024-11-25 13:27:47.300201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.719 [2024-11-25 13:27:47.312450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.719 [2024-11-25 13:27:47.312844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.719 [2024-11-25 13:27:47.312887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.719 [2024-11-25 13:27:47.312902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.720 [2024-11-25 13:27:47.313173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.720 [2024-11-25 13:27:47.313416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.720 [2024-11-25 13:27:47.313438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.720 [2024-11-25 13:27:47.313450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.720 [2024-11-25 13:27:47.313462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.720 [2024-11-25 13:27:47.325643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.720 [2024-11-25 13:27:47.325984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.720 [2024-11-25 13:27:47.326012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.720 [2024-11-25 13:27:47.326028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.720 [2024-11-25 13:27:47.326241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.720 [2024-11-25 13:27:47.326478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.720 [2024-11-25 13:27:47.326502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.720 [2024-11-25 13:27:47.326515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.720 [2024-11-25 13:27:47.326532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.720 [2024-11-25 13:27:47.339036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.720 [2024-11-25 13:27:47.339371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.720 [2024-11-25 13:27:47.339401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.720 [2024-11-25 13:27:47.339416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.720 [2024-11-25 13:27:47.339645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.720 [2024-11-25 13:27:47.339865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.720 [2024-11-25 13:27:47.339884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.720 [2024-11-25 13:27:47.339911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.720 [2024-11-25 13:27:47.339922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.720 [2024-11-25 13:27:47.352430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.720 [2024-11-25 13:27:47.352789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.720 [2024-11-25 13:27:47.352817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.720 [2024-11-25 13:27:47.352833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.720 [2024-11-25 13:27:47.353060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.720 [2024-11-25 13:27:47.353274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.720 [2024-11-25 13:27:47.353319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.720 [2024-11-25 13:27:47.353334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.720 [2024-11-25 13:27:47.353347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.720 [2024-11-25 13:27:47.365702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.720 [2024-11-25 13:27:47.366072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.720 [2024-11-25 13:27:47.366115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.720 [2024-11-25 13:27:47.366131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.720 [2024-11-25 13:27:47.366382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.720 [2024-11-25 13:27:47.366593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.720 [2024-11-25 13:27:47.366613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.720 [2024-11-25 13:27:47.366641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.720 [2024-11-25 13:27:47.366652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.979 [2024-11-25 13:27:47.378953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.979 [2024-11-25 13:27:47.379294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.979 [2024-11-25 13:27:47.379334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.979 [2024-11-25 13:27:47.379351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.979 [2024-11-25 13:27:47.379564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.979 [2024-11-25 13:27:47.379782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.979 [2024-11-25 13:27:47.379817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.979 [2024-11-25 13:27:47.379829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.980 [2024-11-25 13:27:47.379841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.980 [2024-11-25 13:27:47.392225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.980 [2024-11-25 13:27:47.392677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-11-25 13:27:47.392704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.980 [2024-11-25 13:27:47.392734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.980 [2024-11-25 13:27:47.392978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.980 [2024-11-25 13:27:47.393176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.980 [2024-11-25 13:27:47.393195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.980 [2024-11-25 13:27:47.393207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.980 [2024-11-25 13:27:47.393218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.980 [2024-11-25 13:27:47.405455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.980 [2024-11-25 13:27:47.405812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-11-25 13:27:47.405840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.980 [2024-11-25 13:27:47.405855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.980 [2024-11-25 13:27:47.406082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.980 [2024-11-25 13:27:47.406321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.980 [2024-11-25 13:27:47.406350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.980 [2024-11-25 13:27:47.406363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.980 [2024-11-25 13:27:47.406374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.980 [2024-11-25 13:27:47.418715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.980 [2024-11-25 13:27:47.419086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-11-25 13:27:47.419115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.980 [2024-11-25 13:27:47.419131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.980 [2024-11-25 13:27:47.419375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.980 [2024-11-25 13:27:47.419597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.980 [2024-11-25 13:27:47.419616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.980 [2024-11-25 13:27:47.419629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.980 [2024-11-25 13:27:47.419654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3282753 Killed "${NVMF_APP[@]}" "$@" 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3283824 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3283824 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3283824 ']' 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.980 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:49.980 [2024-11-25 13:27:47.432095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.980 [2024-11-25 13:27:47.432474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-11-25 13:27:47.432503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.980 [2024-11-25 13:27:47.432519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.980 [2024-11-25 13:27:47.432762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.980 [2024-11-25 13:27:47.432954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.980 [2024-11-25 13:27:47.432973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.980 [2024-11-25 13:27:47.432984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.980 [2024-11-25 13:27:47.432996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.980 [2024-11-25 13:27:47.471681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.980 [2024-11-25 13:27:47.472074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-11-25 13:27:47.472101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.980 [2024-11-25 13:27:47.472116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.980 [2024-11-25 13:27:47.472365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.980 [2024-11-25 13:27:47.472593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.980 [2024-11-25 13:27:47.472612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.980 [2024-11-25 13:27:47.472623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.980 [2024-11-25 13:27:47.472634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.980 [2024-11-25 13:27:47.480442] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:28:49.980 [2024-11-25 13:27:47.480521] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.980 [2024-11-25 13:27:47.484880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.980 [2024-11-25 13:27:47.485313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.980 [2024-11-25 13:27:47.485342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.980 [2024-11-25 13:27:47.485363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.980 [2024-11-25 13:27:47.485605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.980 [2024-11-25 13:27:47.485814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.980 [2024-11-25 13:27:47.485833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.980 [2024-11-25 13:27:47.485845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.981 [2024-11-25 13:27:47.485856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.981 [2024-11-25 13:27:47.554081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:49.981 [2024-11-25 13:27:47.564452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.981 [2024-11-25 13:27:47.564943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-11-25 13:27:47.564980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.981 [2024-11-25 13:27:47.564998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.981 [2024-11-25 13:27:47.565245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.981 [2024-11-25 13:27:47.565497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.981 [2024-11-25 13:27:47.565531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.981 [2024-11-25 13:27:47.565546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.981 [2024-11-25 13:27:47.565561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.981 [2024-11-25 13:27:47.604513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.981 [2024-11-25 13:27:47.604959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.981 [2024-11-25 13:27:47.604987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.981 [2024-11-25 13:27:47.605004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.981 [2024-11-25 13:27:47.605249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.981 [2024-11-25 13:27:47.605478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.981 [2024-11-25 13:27:47.605499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.981 [2024-11-25 13:27:47.605512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.981 [2024-11-25 13:27:47.605535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:49.981 [2024-11-25 13:27:47.611116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.981 [2024-11-25 13:27:47.611147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.981 [2024-11-25 13:27:47.611174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.981 [2024-11-25 13:27:47.611185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:49.981 [2024-11-25 13:27:47.611194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.981 [2024-11-25 13:27:47.612578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.981 [2024-11-25 13:27:47.612646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.981 [2024-11-25 13:27:47.612650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.981 [2024-11-25 13:27:47.617947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.982 [2024-11-25 13:27:47.618372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.982 [2024-11-25 13:27:47.618405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:49.982 [2024-11-25 13:27:47.618422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:49.982 [2024-11-25 13:27:47.618658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:49.982 [2024-11-25 13:27:47.618871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.982 [2024-11-25 13:27:47.618892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.982 [2024-11-25 13:27:47.618906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.982 [2024-11-25 13:27:47.618920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.241 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.241 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.242 [2024-11-25 13:27:47.739730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 [2024-11-25 13:27:47.740070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.242 [2024-11-25 13:27:47.740098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:50.242 [2024-11-25 13:27:47.740115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:50.242 [2024-11-25 13:27:47.740340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:50.242 [2024-11-25 13:27:47.740558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.242 [2024-11-25 13:27:47.740579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.242 [2024-11-25 13:27:47.740592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.242 [2024-11-25 13:27:47.740605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.242 [2024-11-25 13:27:47.753153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 [2024-11-25 13:27:47.753496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.242 [2024-11-25 13:27:47.753524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:50.242 [2024-11-25 13:27:47.753540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:50.242 [2024-11-25 13:27:47.753768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:50.242 [2024-11-25 13:27:47.753979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.242 [2024-11-25 13:27:47.754000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.242 [2024-11-25 13:27:47.754012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.242 [2024-11-25 13:27:47.754024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.242 [2024-11-25 13:27:47.763838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.242 [2024-11-25 13:27:47.766686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 [2024-11-25 13:27:47.767034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.242 [2024-11-25 13:27:47.767063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:50.242 [2024-11-25 13:27:47.767079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:50.242 [2024-11-25 13:27:47.767297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:50.242 [2024-11-25 13:27:47.767525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.242 [2024-11-25 13:27:47.767547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.242 [2024-11-25 13:27:47.767561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.242 [2024-11-25 13:27:47.767573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.242 [2024-11-25 13:27:47.780229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 [2024-11-25 13:27:47.780687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.242 [2024-11-25 13:27:47.780722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:50.242 [2024-11-25 13:27:47.780741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:50.242 [2024-11-25 13:27:47.780975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:50.242 [2024-11-25 13:27:47.781188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.242 [2024-11-25 13:27:47.781210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.242 [2024-11-25 13:27:47.781225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.242 [2024-11-25 13:27:47.781240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.242 [2024-11-25 13:27:47.793660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 [2024-11-25 13:27:47.794016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.242 [2024-11-25 13:27:47.794045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:50.242 [2024-11-25 13:27:47.794062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:50.242 [2024-11-25 13:27:47.794276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:50.242 [2024-11-25 13:27:47.794534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.242 [2024-11-25 13:27:47.794556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.242 [2024-11-25 13:27:47.794570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.242 [2024-11-25 13:27:47.794583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.242 Malloc0 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.242 [2024-11-25 13:27:47.807258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 [2024-11-25 13:27:47.807682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.242 [2024-11-25 13:27:47.807713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:50.242 [2024-11-25 13:27:47.807729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:50.242 [2024-11-25 13:27:47.807960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:50.242 [2024-11-25 13:27:47.808173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.242 [2024-11-25 13:27:47.808193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.242 [2024-11-25 13:27:47.808207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.242 [2024-11-25 13:27:47.808220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.242 [2024-11-25 13:27:47.820895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 [2024-11-25 13:27:47.821257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.242 [2024-11-25 13:27:47.821286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1941ff0 with addr=10.0.0.2, port=4420 00:28:50.242 [2024-11-25 13:27:47.821310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941ff0 is same with the state(6) to be set 00:28:50.242 [2024-11-25 13:27:47.821526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1941ff0 (9): Bad file descriptor 00:28:50.242 [2024-11-25 13:27:47.821755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.242 [2024-11-25 13:27:47.821760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:28:50.242 [2024-11-25 13:27:47.821775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.242 [2024-11-25 13:27:47.821792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.242 [2024-11-25 13:27:47.821804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.242 13:27:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3283042 00:28:50.242 [2024-11-25 13:27:47.834403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.242 3720.50 IOPS, 14.53 MiB/s [2024-11-25T12:27:47.901Z] [2024-11-25 13:27:47.862780] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:52.549 4370.86 IOPS, 17.07 MiB/s [2024-11-25T12:27:51.140Z] 4906.75 IOPS, 19.17 MiB/s [2024-11-25T12:27:52.073Z] 5294.67 IOPS, 20.68 MiB/s [2024-11-25T12:27:53.006Z] 5619.90 IOPS, 21.95 MiB/s [2024-11-25T12:27:53.940Z] 5890.82 IOPS, 23.01 MiB/s [2024-11-25T12:27:54.873Z] 6111.50 IOPS, 23.87 MiB/s [2024-11-25T12:27:56.247Z] 6303.08 IOPS, 24.62 MiB/s [2024-11-25T12:27:57.180Z] 6463.79 IOPS, 25.25 MiB/s [2024-11-25T12:27:57.180Z] 6602.33 IOPS, 25.79 MiB/s 00:28:59.521 Latency(us) 00:28:59.521 [2024-11-25T12:27:57.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.521 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.521 Verification LBA range: start 0x0 length 0x4000 00:28:59.521 Nvme1n1 : 15.01 6605.56 25.80 10085.63 0.00 7646.21 561.30 18544.26 00:28:59.521 [2024-11-25T12:27:57.180Z] =================================================================================================================== 00:28:59.521 [2024-11-25T12:27:57.180Z] Total : 6605.56 25.80 10085.63 0.00 7646.21 561.30 18544.26 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.521 rmmod nvme_tcp 00:28:59.521 rmmod nvme_fabrics 00:28:59.521 rmmod nvme_keyring 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3283824 ']' 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3283824 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3283824 ']' 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3283824 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.521 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3283824 00:28:59.779 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.780 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.780 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3283824' 00:28:59.780 killing process with pid 3283824 00:28:59.780 
13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3283824 00:28:59.780 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3283824 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:00.038 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.039 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.039 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.039 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.039 13:27:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:01.944 00:29:01.944 real 0m22.747s 00:29:01.944 user 1m0.632s 00:29:01.944 sys 0m4.335s 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.944 ************************************ 00:29:01.944 END TEST nvmf_bdevperf 00:29:01.944 
************************************ 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.944 ************************************ 00:29:01.944 START TEST nvmf_target_disconnect 00:29:01.944 ************************************ 00:29:01.944 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:02.202 * Looking for test storage... 00:29:02.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:02.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.202 --rc genhtml_branch_coverage=1 00:29:02.202 --rc genhtml_function_coverage=1 00:29:02.202 --rc genhtml_legend=1 00:29:02.202 --rc geninfo_all_blocks=1 00:29:02.202 --rc geninfo_unexecuted_blocks=1 
00:29:02.202 00:29:02.202 ' 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:02.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.202 --rc genhtml_branch_coverage=1 00:29:02.202 --rc genhtml_function_coverage=1 00:29:02.202 --rc genhtml_legend=1 00:29:02.202 --rc geninfo_all_blocks=1 00:29:02.202 --rc geninfo_unexecuted_blocks=1 00:29:02.202 00:29:02.202 ' 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:02.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.202 --rc genhtml_branch_coverage=1 00:29:02.202 --rc genhtml_function_coverage=1 00:29:02.202 --rc genhtml_legend=1 00:29:02.202 --rc geninfo_all_blocks=1 00:29:02.202 --rc geninfo_unexecuted_blocks=1 00:29:02.202 00:29:02.202 ' 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:02.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.202 --rc genhtml_branch_coverage=1 00:29:02.202 --rc genhtml_function_coverage=1 00:29:02.202 --rc genhtml_legend=1 00:29:02.202 --rc geninfo_all_blocks=1 00:29:02.202 --rc geninfo_unexecuted_blocks=1 00:29:02.202 00:29:02.202 ' 00:29:02.202 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.203 13:27:59 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.203 13:27:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.729 
13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:04.729 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:04.729 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.729 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:04.730 Found net devices under 0000:09:00.0: cvl_0_0 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:04.730 Found net devices under 0000:09:00.1: cvl_0_1 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.730 13:28:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.730 13:28:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:29:04.730 00:29:04.730 --- 10.0.0.2 ping statistics --- 00:29:04.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.730 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:29:04.730 00:29:04.730 --- 10.0.0.1 ping statistics --- 00:29:04.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.730 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.730 13:28:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.730 ************************************ 00:29:04.730 START TEST nvmf_target_disconnect_tc1 00:29:04.730 ************************************ 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.730 [2024-11-25 13:28:02.206246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.730 [2024-11-25 13:28:02.206355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x618f40 with 
addr=10.0.0.2, port=4420 00:29:04.730 [2024-11-25 13:28:02.206391] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:04.730 [2024-11-25 13:28:02.206410] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:04.730 [2024-11-25 13:28:02.206422] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:04.730 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:04.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:04.730 Initializing NVMe Controllers 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.730 00:29:04.730 real 0m0.098s 00:29:04.730 user 0m0.048s 00:29:04.730 sys 0m0.049s 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.730 ************************************ 00:29:04.730 END TEST nvmf_target_disconnect_tc1 00:29:04.730 ************************************ 00:29:04.730 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.731 13:28:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.731 ************************************ 00:29:04.731 START TEST nvmf_target_disconnect_tc2 00:29:04.731 ************************************ 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3286982 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3286982 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3286982 ']' 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.731 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.731 [2024-11-25 13:28:02.317431] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:29:04.731 [2024-11-25 13:28:02.317519] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.988 [2024-11-25 13:28:02.386628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.988 [2024-11-25 13:28:02.443935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.988 [2024-11-25 13:28:02.443982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.988 [2024-11-25 13:28:02.444010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.988 [2024-11-25 13:28:02.444021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.988 [2024-11-25 13:28:02.444030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:04.988 [2024-11-25 13:28:02.445535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:04.988 [2024-11-25 13:28:02.445599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:04.988 [2024-11-25 13:28:02.445645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:04.988 [2024-11-25 13:28:02.445648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.988 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.988 Malloc0 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.989 13:28:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.989 [2024-11-25 13:28:02.615975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.989 13:28:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:04.989 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:04.989 [2024-11-25 13:28:02.644241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3287012
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:05.246 13:28:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:29:07.163 13:28:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3286982
00:29:07.163 13:28:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Write completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.163 Read completed with error (sct=0, sc=8)
00:29:07.163 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 [2024-11-25 13:28:04.669961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 [2024-11-25 13:28:04.670314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 [2024-11-25 13:28:04.670630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Read completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 Write completed with error (sct=0, sc=8)
00:29:07.164 starting I/O failed
00:29:07.164 [2024-11-25 13:28:04.670949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:07.164 [2024-11-25 13:28:04.671088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.164 [2024-11-25 13:28:04.671139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.164 qpair failed and we were unable to recover it.
00:29:07.164 [2024-11-25 13:28:04.671297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.164 [2024-11-25 13:28:04.671354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.164 qpair failed and we were unable to recover it.
00:29:07.164 [2024-11-25 13:28:04.671456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.164 [2024-11-25 13:28:04.671483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.164 qpair failed and we were unable to recover it.
00:29:07.164 [2024-11-25 13:28:04.671613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.671640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.671769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.671796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.671894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.671934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.672067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.672096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.672184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.672210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.672328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.672362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.672483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.672510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.672600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.672627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.672726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.672754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.672877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.672903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.673934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.673961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.674051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.674079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.674167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.674195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.674316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.674344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.674454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.674508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.674631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.674671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.674798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.674826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.674917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.674944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.675959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.675985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.676095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.676122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.676214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.676243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.676348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.676377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.676499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.165 [2024-11-25 13:28:04.676526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.165 qpair failed and we were unable to recover it.
00:29:07.165 [2024-11-25 13:28:04.676613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.676639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.676726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.676754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.676846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.676875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.676974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.677937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.677964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.678085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.678111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.678245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.678284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.678378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.678406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.678524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.678550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.678667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.678693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.678791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.678817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.678915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.678941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.679025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.679052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.679155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.679194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.679333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.679374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.679498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.679526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.679666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.679691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.679830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.679860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.679974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.679999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.680090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.680116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.680223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.680262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.680437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.680476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.680578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.680606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.680717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.680744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.680828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.680854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.681034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.681088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.681206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.681231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.681362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.166 [2024-11-25 13:28:04.681401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.166 qpair failed and we were unable to recover it.
00:29:07.166 [2024-11-25 13:28:04.681525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.166 [2024-11-25 13:28:04.681556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.166 qpair failed and we were unable to recover it. 00:29:07.166 [2024-11-25 13:28:04.681646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.166 [2024-11-25 13:28:04.681674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.166 qpair failed and we were unable to recover it. 00:29:07.166 [2024-11-25 13:28:04.681791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.166 [2024-11-25 13:28:04.681819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.166 qpair failed and we were unable to recover it. 00:29:07.166 [2024-11-25 13:28:04.681956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.682011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.682168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.682208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.682356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.682385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.682499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.682525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.682643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.682669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.682789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.682816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.682904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.682930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.683046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.683072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.683193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.683220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.683334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.683360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.683473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.683500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.683615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.683641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.683732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.683758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.683872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.683906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.684000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.684141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.684256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.684409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.684528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.684638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.684754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.684862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.684891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.685006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.685145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.685269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.685429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.685546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.685668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.685786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.685906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.685933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.686051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.686077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.686186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.686212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.686296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.686327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.686412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.686438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.686552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.686579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.686722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.686748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.686858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.686884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 
00:29:07.167 [2024-11-25 13:28:04.686991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.167 [2024-11-25 13:28:04.687017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.167 qpair failed and we were unable to recover it. 00:29:07.167 [2024-11-25 13:28:04.687122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.687148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.687275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.687301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.687408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.687434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.687521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.687546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 
00:29:07.168 [2024-11-25 13:28:04.687642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.687669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.687751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.687778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.687870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.687897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.688014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.688040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.688126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.688152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 
00:29:07.168 [2024-11-25 13:28:04.688248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.688287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.688414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.688453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.688589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.688627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.688749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.688777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.688889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.688914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 
00:29:07.168 [2024-11-25 13:28:04.689006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.689145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.689293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.689412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.689547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 
00:29:07.168 [2024-11-25 13:28:04.689654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.689768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.689913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.689941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.690031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.690143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 
00:29:07.168 [2024-11-25 13:28:04.690281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.690427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.690531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.690674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.690808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 
00:29:07.168 [2024-11-25 13:28:04.690925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.690953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.691072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.691097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.691203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.691243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.168 [2024-11-25 13:28:04.691341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.168 [2024-11-25 13:28:04.691370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.168 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.691490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.691517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.691655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.691681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.691763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.691789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.691932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.691957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.692073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.692100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.692203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.692244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.692342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.692370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.692462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.692488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.692614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.692640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.692722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.692753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.692840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.692866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.692979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.693147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.693258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.693370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.693506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.693615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.693748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.693888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.693913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.694031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.694059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.694211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.694239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.694373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.694413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.694510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.694537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.694686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.694712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.694828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.694854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.695077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.695140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.695278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.695311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.695412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.695438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.695552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.695577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.695664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.695689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.695850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.695902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.696034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.696060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.696189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.696228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.696337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.696377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.696508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.696536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.696648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.696675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 
00:29:07.169 [2024-11-25 13:28:04.696760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.696788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.169 qpair failed and we were unable to recover it. 00:29:07.169 [2024-11-25 13:28:04.696906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.169 [2024-11-25 13:28:04.696932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.697038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.697064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.697194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.697233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.697365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.697394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.170 [2024-11-25 13:28:04.697513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.697539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.697628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.697657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.697749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.697775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.697896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.697922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.698063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.698090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.170 [2024-11-25 13:28:04.698220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.698259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.698416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.698444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.698543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.698571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.698689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.698720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.698833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.698859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.170 [2024-11-25 13:28:04.698944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.698970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.699074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.699114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.699278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.699324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.699451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.699480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.699571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.699597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.170 [2024-11-25 13:28:04.699680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.699706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.699801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.699827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.699967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.699993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.700085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.700113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.700233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.700260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.170 [2024-11-25 13:28:04.700362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.700389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.700505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.700532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.700648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.700674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.700790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.700816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.700929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.700954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.170 [2024-11-25 13:28:04.701050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.701079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.701197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.701225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.701358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.701398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.701523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.701551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.701646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.701673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.170 [2024-11-25 13:28:04.701783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.701810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.701904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.701931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.702074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.702114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.702243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.702271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 00:29:07.170 [2024-11-25 13:28:04.702423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.170 [2024-11-25 13:28:04.702450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.170 qpair failed and we were unable to recover it. 
00:29:07.171 [2024-11-25 13:28:04.702538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.702570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.702657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.702683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.702798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.702825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.702907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.702933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.703023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.703051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 
00:29:07.171 [2024-11-25 13:28:04.703139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.703167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.703322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.703350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.703468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.703494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.703584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.703610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.703695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.703722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 
00:29:07.171 [2024-11-25 13:28:04.703802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.703828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.703965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.704026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.704170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.704195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.704301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.704339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.704455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.704481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 
00:29:07.171 [2024-11-25 13:28:04.704594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.704621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.704713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.704739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.704853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.704879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.704987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.705012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.705131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.705157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 
00:29:07.171 [2024-11-25 13:28:04.705268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.705294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.705398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.705437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.705536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.705563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.705680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.705707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 00:29:07.171 [2024-11-25 13:28:04.705795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.171 [2024-11-25 13:28:04.705821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.171 qpair failed and we were unable to recover it. 
00:29:07.174 [same three-message sequence repeated from 2024-11-25 13:28:04.705915 through 13:28:04.721452 for tqpairs 0x19b9fa0, 0x7f83e8000b90, 0x7f83ec000b90, and 0x7f83f4000b90: connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it]
00:29:07.174 [2024-11-25 13:28:04.721568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.721594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.721688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.721716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.721808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.721835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.721952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.721978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.722098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.722124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 
00:29:07.174 [2024-11-25 13:28:04.722268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.722295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.722418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.722446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.722577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.722615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.722767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.722795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.722936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.722963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 
00:29:07.174 [2024-11-25 13:28:04.723105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.174 [2024-11-25 13:28:04.723131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.174 qpair failed and we were unable to recover it. 00:29:07.174 [2024-11-25 13:28:04.723270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.723297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.723413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.723445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.723562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.723589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.723704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.723730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 
00:29:07.175 [2024-11-25 13:28:04.723850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.723878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.724007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.724061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.724179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.724206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.724322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.724349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.724438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.724464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 
00:29:07.175 [2024-11-25 13:28:04.724557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.724583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.724711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.724738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.724844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.724871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.724988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.725132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 
00:29:07.175 [2024-11-25 13:28:04.725237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.725375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.725560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.725671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.725816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 
00:29:07.175 [2024-11-25 13:28:04.725933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.725959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.726051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.726077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.726195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.726223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.726318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.726345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.726459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.726486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 
00:29:07.175 [2024-11-25 13:28:04.726564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.726591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.726699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.726725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.726837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.726864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.726982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.727128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 
00:29:07.175 [2024-11-25 13:28:04.727239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.727418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.727534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.727663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.727778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 
00:29:07.175 [2024-11-25 13:28:04.727951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.727977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.728070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.728097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.728188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.728214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.175 [2024-11-25 13:28:04.728298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.175 [2024-11-25 13:28:04.728331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.175 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.728445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.728471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.728556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.728582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.728724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.728750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.728828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.728859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.728971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.728997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.729090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.729118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.729233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.729259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.729389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.729416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.729528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.729555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.729641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.729667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.729787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.729813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.729903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.729930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.730022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.730186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.730307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.730451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.730557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.730665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.730769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.730888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.730914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.731057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.731192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.731310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.731452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.731599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.731745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.731849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.731969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.731995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.732110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.732136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.732249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.732275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.732406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.732445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.732539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.732566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.732708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.732735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.732846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.732873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.732980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.733006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.733140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.733167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 
00:29:07.176 [2024-11-25 13:28:04.733251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.733279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.733381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.176 [2024-11-25 13:28:04.733408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.176 qpair failed and we were unable to recover it. 00:29:07.176 [2024-11-25 13:28:04.733516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.733542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.733649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.733676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.733797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.733824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 
00:29:07.177 [2024-11-25 13:28:04.733917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.733944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.734093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.734132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.734252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.734286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.734411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.734438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.734553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.734579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 
00:29:07.177 [2024-11-25 13:28:04.734691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.734717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.734831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.734858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.734996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.735102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.735227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 
00:29:07.177 [2024-11-25 13:28:04.735398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.735518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.735664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.735778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.735926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.735978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 
00:29:07.177 [2024-11-25 13:28:04.736119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.736145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.736235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.736262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.736401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.736441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.736539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.736566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.736679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.736705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 
00:29:07.177 [2024-11-25 13:28:04.736812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.736839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.736979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.737144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.737266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.737415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 
00:29:07.177 [2024-11-25 13:28:04.737534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.737695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.737831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.737937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.737963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.177 qpair failed and we were unable to recover it. 00:29:07.177 [2024-11-25 13:28:04.738075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.177 [2024-11-25 13:28:04.738115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.738243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.738282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.738405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.738444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.738546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.738572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.738690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.738715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.738798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.738823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.739001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.739169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.739276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.739397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.739538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.739677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.739790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.739954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.739979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.740076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.740106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.740220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.740246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.740329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.740356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.740452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.740478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.740593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.740619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.740731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.740757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.740901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.740927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.741053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.741092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.741194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.741222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.741339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.741367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.741484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.741509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.741597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.741622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.741709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.741734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.741855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.741882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.741974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.742107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.742218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.742356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.742491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.742661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.742793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.742934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.742961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 
00:29:07.178 [2024-11-25 13:28:04.743080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.743106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.743197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.743222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.178 qpair failed and we were unable to recover it. 00:29:07.178 [2024-11-25 13:28:04.743364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.178 [2024-11-25 13:28:04.743390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.743527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.743555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.743669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.743700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 
00:29:07.179 [2024-11-25 13:28:04.743823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.743849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.743962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.743987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.744072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.744097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.744247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.744287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.744386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.744414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 
00:29:07.179 [2024-11-25 13:28:04.744527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.744553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.744670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.744696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.744787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.744814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.744941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.744981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 00:29:07.179 [2024-11-25 13:28:04.745078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.745104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it. 
00:29:07.179 [2024-11-25 13:28:04.745261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.179 [2024-11-25 13:28:04.745300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.179 qpair failed and we were unable to recover it.
[... same posix_sock_create/nvme_tcp_qpair_connect_sock error pair repeated continuously from 13:28:04.745 through 13:28:04.761: every attempt fails with connect() errno = 111 (connection refused) against addr=10.0.0.2, port=4420, cycling through tqpair values 0x7f83ec000b90, 0x7f83f4000b90, 0x7f83e8000b90, and 0x19b9fa0, and each ends with "qpair failed and we were unable to recover it." ...]
00:29:07.182 [2024-11-25 13:28:04.761460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.761488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.761600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.761626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.761741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.761768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.761911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.761936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.762049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.762075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 
00:29:07.182 [2024-11-25 13:28:04.762181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.762221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.762344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.762372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.762490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.762517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.762611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.762642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.762783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.762809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 
00:29:07.182 [2024-11-25 13:28:04.762896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.762921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.763061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.182 [2024-11-25 13:28:04.763109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.182 qpair failed and we were unable to recover it. 00:29:07.182 [2024-11-25 13:28:04.763238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.763278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.763390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.763430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.763554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.763583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-11-25 13:28:04.763680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.763707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.763803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.763830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.763927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.763954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.764067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.764093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.764206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.764232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-11-25 13:28:04.764336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.764363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.764452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.764478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.764626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.764652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.764790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.764816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.764927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.764953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-11-25 13:28:04.765065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.765091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.765206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.765233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.765389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.765428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.765549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.765575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.765689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.765715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-11-25 13:28:04.765803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.765828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.765922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.765947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.766055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.766094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.766299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.766334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.766421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.766449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-11-25 13:28:04.766593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.766620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.766715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.766740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.766825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.766851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.766937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.766962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.767111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.767137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-11-25 13:28:04.767249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.767290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.767398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.767427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.767522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.767549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.767635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.767662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.767756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.767782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 
00:29:07.183 [2024-11-25 13:28:04.767957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.767983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.768095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.768122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.768211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.768242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.768340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.768369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.183 qpair failed and we were unable to recover it. 00:29:07.183 [2024-11-25 13:28:04.768497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.183 [2024-11-25 13:28:04.768524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 
00:29:07.184 [2024-11-25 13:28:04.768606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.768633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.768747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.768774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.768865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.768891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.769005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.769123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 
00:29:07.184 [2024-11-25 13:28:04.769259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.769381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.769502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.769639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.769806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 
00:29:07.184 [2024-11-25 13:28:04.769949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.769975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.770092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.770118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.770221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.770250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.770360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.770387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.770502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.770528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 
00:29:07.184 [2024-11-25 13:28:04.770637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.770662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.770749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.770775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.770881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.770907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.771016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.771042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.771128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.771154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 
00:29:07.184 [2024-11-25 13:28:04.771246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.771272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.771395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.771423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.771515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.771542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.771651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.771678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 00:29:07.184 [2024-11-25 13:28:04.771791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.184 [2024-11-25 13:28:04.771818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.184 qpair failed and we were unable to recover it. 
00:29:07.184 [2024-11-25 13:28:04.771928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.184 [2024-11-25 13:28:04.771959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.184 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it) repeats continuously from 13:28:04.772069 through 13:28:04.788432, against addr=10.0.0.2, port=4420, cycling over tqpairs 0x7f83e8000b90, 0x7f83ec000b90, 0x7f83f4000b90, and 0x19b9fa0 ...]
00:29:07.187 [2024-11-25 13:28:04.788529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.187 [2024-11-25 13:28:04.788567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.788688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.788716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.788905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.788931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.789047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.789072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.789192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.789218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 
00:29:07.188 [2024-11-25 13:28:04.789359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.789385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.789501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.789529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.789645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.789672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.789753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.789780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.789892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.789919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 
00:29:07.188 [2024-11-25 13:28:04.790046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.790073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.790166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.790192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.790318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.790345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.790434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.790461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.790601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.790628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 
00:29:07.188 [2024-11-25 13:28:04.790738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.790764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.790881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.790908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.791014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.791040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.791140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.791179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.791311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.791339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 
00:29:07.188 [2024-11-25 13:28:04.791482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.791509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.791605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.791632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.791847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.791873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.792012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.792038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.792144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.792170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 
00:29:07.188 [2024-11-25 13:28:04.792317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.792344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.792433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.792458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.792572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.792597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.792677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.792702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.792816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.792842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 
00:29:07.188 [2024-11-25 13:28:04.792983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.793008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.793089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.793115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.793211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.793255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.793385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.793413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 00:29:07.188 [2024-11-25 13:28:04.793504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.188 [2024-11-25 13:28:04.793530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.188 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.793673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.793699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.793781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.793808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.794002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.794030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.794144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.794171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.794267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.794296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.794400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.794426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.794516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.794542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.794691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.794744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.794900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.794939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.795165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.795219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.795379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.795418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.795522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.795549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.795693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.795744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.795836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.795861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.795972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.796000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.796192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.796233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.796347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.796376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.796464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.796490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.796609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.796657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.796779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.796820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.796917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.796943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.797040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.797068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.797189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.797225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.797361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.797389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.797482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.797515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.797600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.797627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.797743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.797770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.797902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.797929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.798070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.798097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.798192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.798223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.798348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.798387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.798509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.798536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.798655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.798682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.798817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.798842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.798959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.799006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 
00:29:07.189 [2024-11-25 13:28:04.799117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.799143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.799259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.799285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.799387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.189 [2024-11-25 13:28:04.799413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.189 qpair failed and we were unable to recover it. 00:29:07.189 [2024-11-25 13:28:04.799508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.799547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-11-25 13:28:04.799666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.799693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 
00:29:07.190 [2024-11-25 13:28:04.799776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.799803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-11-25 13:28:04.799885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.799911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-11-25 13:28:04.800065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.800095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-11-25 13:28:04.800190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.800229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-11-25 13:28:04.800323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.800350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 
00:29:07.190 [2024-11-25 13:28:04.800430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.800456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-11-25 13:28:04.800570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.800596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.190 [2024-11-25 13:28:04.800733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.190 [2024-11-25 13:28:04.800759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.190 qpair failed and we were unable to recover it. 00:29:07.502 [2024-11-25 13:28:04.800870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-11-25 13:28:04.800895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-11-25 13:28:04.801011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-11-25 13:28:04.801037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-11-25 13:28:04.801117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.801142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.801259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.801292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.801447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.801474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.801585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.801611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.801723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.801749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.801872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.801898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.802970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.802996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.803113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.803139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.803275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.803324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.803416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.803443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.803542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.803571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.502 [2024-11-25 13:28:04.803661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.502 [2024-11-25 13:28:04.803686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.502 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.803773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.803800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.803888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.803917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.804056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.804174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.804294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.804471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.804624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.804770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.804890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.804986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.805104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.805244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.805370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.805514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.805622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.805801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.805924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.805951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.806068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.806233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.806369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.806497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.806641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.806766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.806887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.806979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.807152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.807291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.807407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.807519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.807655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.807786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.807915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.807943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.808050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.808077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.808168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.808195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.808322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.808361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.808459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.808486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.808583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.808609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.503 [2024-11-25 13:28:04.808695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-11-25 13:28:04.808721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.808834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.808860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.808952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.808978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.809892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.809931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.810965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.810990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.811101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.811125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.811220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.811247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.811370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.811398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.811485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.811510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.811707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.811733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.811819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.811845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.811964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.811990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.812103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.812129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.812264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.812291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.812394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.812432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.812554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.812583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.812688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.812715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.812826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.812856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.812946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.812973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.813066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.813094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.813183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.813209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.813328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.813359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.813461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.813488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.813601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.504 [2024-11-25 13:28:04.813628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.504 qpair failed and we were unable to recover it.
00:29:07.504 [2024-11-25 13:28:04.813751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.813779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.813863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.813889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.814027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.814144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.814253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-11-25 13:28:04.814407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.814549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.814694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.814809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.814923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.814949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-11-25 13:28:04.815040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.815202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.815329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.815466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.815593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-11-25 13:28:04.815730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.815848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.815970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.815997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.816084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.816111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.816229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.816258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-11-25 13:28:04.816359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.816406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.816496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.816523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.816610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.816636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.816727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.816753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.816877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.816905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-11-25 13:28:04.817021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.817143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.817265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.817423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.817548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-11-25 13:28:04.817658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.817798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.817905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.817931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.818023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.818051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.818157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.818196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 
00:29:07.505 [2024-11-25 13:28:04.818288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.818324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.818413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.818440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.818535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.818561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.505 [2024-11-25 13:28:04.818674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.505 [2024-11-25 13:28:04.818701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.505 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.818816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.818842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-11-25 13:28:04.818949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.818980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.819111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.819140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.819261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.819287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.819390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.819417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.819505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.819532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-11-25 13:28:04.819648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.819675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.819789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.819818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.819912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.819940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.820083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.820200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-11-25 13:28:04.820315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.820434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.820578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.820689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.820842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-11-25 13:28:04.820954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.820981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.821102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.821225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.821351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.821462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-11-25 13:28:04.821571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.821681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.821825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.821937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.821962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.822048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.822077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-11-25 13:28:04.822172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.822201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.822290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.822323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.822413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.822440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.822576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.822602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.822740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.822766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-11-25 13:28:04.822878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.822903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.822983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.823009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-11-25 13:28:04.823104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-11-25 13:28:04.823132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.823250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.823276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.823401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.823428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-11-25 13:28:04.823568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.823594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.823684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.823711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.823798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.823825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.823931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.823957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.824061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.824102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-11-25 13:28:04.824218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.824246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.824352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.824379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.824465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.824491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.824614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.824640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-11-25 13:28:04.824723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-11-25 13:28:04.824748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-11-25 13:28:04.824868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.824895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.824983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.825129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.825280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.825404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.825547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.825657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.825825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.825937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.825963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.826083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.826110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.826235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.826274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.826372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.826413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.826508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.826535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.826621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.826647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.826754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.826779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.826895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.826920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.827007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.827032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.827119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.827145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.827254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.827279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.827365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.827393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.827477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.827504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.507 qpair failed and we were unable to recover it.
00:29:07.507 [2024-11-25 13:28:04.827599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.507 [2024-11-25 13:28:04.827627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.827717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.827747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.827860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.827886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.827995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.828021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.828144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.828170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.828253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.828279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.828386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.828426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.828551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.828578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.828775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.828800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.828893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.828918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.828997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.829923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.829949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.830956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.830982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.831097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.831123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.831235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.831265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.831387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.831413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.831503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.831530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.831620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.831645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.831764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.831790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.831977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.832003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.832098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.832123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.832228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.832253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.832339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.832366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.832493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.832533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.508 [2024-11-25 13:28:04.832653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-11-25 13:28:04.832681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.832795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.832822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.832911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.832940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.833950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.833976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.834095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.834122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.834218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.834256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.834352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.834381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.834500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.834526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.834645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.834671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.834761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.834787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.834875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.834909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.835002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.835030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.835116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.835142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.835253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.835278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.835403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.835429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.835622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.835648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.835762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.835788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.835944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.835973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.836090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.836117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.836231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.836259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.836391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.836418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.836502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.836528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.836626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.836652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.836765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.836791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.836937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.836964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.837054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.837082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.837210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.837238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.837438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.837465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.837577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.837603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.837721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.837746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.837934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.837960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.838077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.509 [2024-11-25 13:28:04.838104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.509 qpair failed and we were unable to recover it.
00:29:07.509 [2024-11-25 13:28:04.838226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.838253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.838369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.838395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.838538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.838564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.838697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.838724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.838865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.838891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.839038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.839177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.839354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.839480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.839626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.839769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.839883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.839996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.840023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.840128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.840168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.840290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.840325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.840444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.840470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.840594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.510 [2024-11-25 13:28:04.840620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.510 qpair failed and we were unable to recover it.
00:29:07.510 [2024-11-25 13:28:04.840735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.840762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.840874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.840904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.841001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.841029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.841153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.841179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.841273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.841301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 
00:29:07.510 [2024-11-25 13:28:04.841457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.841484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.841579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.841608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.841720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.841746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.841861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.841887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.841975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 
00:29:07.510 [2024-11-25 13:28:04.842105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.842268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.842388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.842528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.842650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 
00:29:07.510 [2024-11-25 13:28:04.842767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.842886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.842911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.843031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.843056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.843188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.843228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.843340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.843368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 
00:29:07.510 [2024-11-25 13:28:04.843498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.843537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-11-25 13:28:04.843684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-11-25 13:28:04.843712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.843832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.843861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.843953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.843980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.844068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.844095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-11-25 13:28:04.844210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.844237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.844394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.844433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.844554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.844581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.844698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.844730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.844869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.844896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-11-25 13:28:04.845013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.845039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.845132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.845158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.845252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.845280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.845418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.845457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.845564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.845603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-11-25 13:28:04.845724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.845751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.845867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.845893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.846003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.846141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.846251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-11-25 13:28:04.846400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.846509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.846660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.846798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.846943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.846970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-11-25 13:28:04.847054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.847082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.847199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.847226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.847341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.847368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.847492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.847519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.847614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.847640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-11-25 13:28:04.847760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.847787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.847928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.847954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.848048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.848075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.848216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.848242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-11-25 13:28:04.848329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-11-25 13:28:04.848356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-11-25 13:28:04.848486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.848524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.848624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.848653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.848746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.848773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.848882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.848908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.849019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-11-25 13:28:04.849162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.849295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.849443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.849550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.849693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-11-25 13:28:04.849836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.849953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.849979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.850098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.850125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.850220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.850251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.850343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.850371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-11-25 13:28:04.850460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.850496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.850645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.850683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.850775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.850803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.850888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.850915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.851007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-11-25 13:28:04.851146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.851261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.851407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.851559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.851671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-11-25 13:28:04.851780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.851898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.851924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.852049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.852078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.852200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.852231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-11-25 13:28:04.852339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-11-25 13:28:04.852379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.515 [2024-11-25 13:28:04.866796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.866822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.866938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.866964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.867051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.867187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.867335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 
00:29:07.515 [2024-11-25 13:28:04.867450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.867563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.867668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.867809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.867922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.867949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 
00:29:07.515 [2024-11-25 13:28:04.868058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.868086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.868187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.868227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.868384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.868418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.868508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.868536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.868675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.868701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 
00:29:07.515 [2024-11-25 13:28:04.868817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.868842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.868988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.869014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.869102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-11-25 13:28:04.869130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-11-25 13:28:04.869273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.869300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.869395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.869421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-11-25 13:28:04.869506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.869533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.869639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.869665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.869750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.869778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.869871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.869899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.870026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-11-25 13:28:04.870135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.870257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.870407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.870521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.870640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-11-25 13:28:04.870781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.870919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.870945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.871064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.871188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.871334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-11-25 13:28:04.871450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.871594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.871710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.871820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.871937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.871965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-11-25 13:28:04.872062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.872197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.872338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.872448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.872557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-11-25 13:28:04.872691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.872829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.872932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.872958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.873071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.873097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.873210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.873236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-11-25 13:28:04.873374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.873413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.873512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-11-25 13:28:04.873541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-11-25 13:28:04.873634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.873666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.873785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.873812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.873922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.873948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.874030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.874058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.874148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.874175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.874269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.874395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.874513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.874551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.874706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.874734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.874854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.874883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.874975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.875133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.875275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.875436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.875582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.875706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.875844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.875956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.875982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.876103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.876129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.876224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.876264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.876396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.876435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.876557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.876584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.876663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.876689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.876769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.876796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.876885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.876911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.876999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.877024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.877166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.877192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.877290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.877342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.877437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.877466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.877607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.877634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.877743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.877770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.877885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.877911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.877990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.878016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.878128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.878154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.878267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.878294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.878416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.878443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.878534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.878560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.878669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.878696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-11-25 13:28:04.878811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-11-25 13:28:04.878837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-11-25 13:28:04.878959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.878985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.879100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.879225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.879377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.879491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-11-25 13:28:04.879606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.879712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.879837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.879947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.879973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.880091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.880117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-11-25 13:28:04.880235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.880261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.880353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.880380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.880470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.880495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.880619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.880646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.880768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.880795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-11-25 13:28:04.880890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.880919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.881011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.881124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.881243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.881387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-11-25 13:28:04.881497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.881601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.881737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.881877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.881905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.882044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-11-25 13:28:04.882152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.882292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.882413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.882548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.882662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-11-25 13:28:04.882777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.882905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.882944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.883065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.883092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.883213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.883239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.883351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.883377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-11-25 13:28:04.883528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.883553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.883665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.883691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.883808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.883833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-11-25 13:28:04.883937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-11-25 13:28:04.883963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.884083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.884108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.884225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.884251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.884352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.884378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.884466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.884492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.884618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.884645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.884757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.884782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.884881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.884907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.885043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.885068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.885183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.885210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.885323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.885350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.885466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.885493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.885636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.885661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.885801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.885827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.885944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.885971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.886110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.886136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.886249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.886274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.886382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.886422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.886527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.886555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.886671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.886697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.886789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.886815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.886932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.886958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.887049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.887166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.887343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.887484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.887598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.887735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.887851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.887967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.887995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.888107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.888134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.888245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.888276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.888371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.888399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.888481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.888508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.888601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.888627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.888740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.888766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.888884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.888910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-11-25 13:28:04.889027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.889053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.889135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-11-25 13:28:04.889161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-11-25 13:28:04.889258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-11-25 13:28:04.889284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 00:29:07.520 [2024-11-25 13:28:04.889386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-11-25 13:28:04.889413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 00:29:07.520 [2024-11-25 13:28:04.889499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-11-25 13:28:04.889525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 
00:29:07.520 [2024-11-25 13:28:04.889674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.520 [2024-11-25 13:28:04.889702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.520 qpair failed and we were unable to recover it.
00:29:07.520 [... the same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it) repeats continuously from 13:28:04.889780 through 13:28:04.905465, alternating between tqpair=0x7f83e8000b90 and tqpair=0x7f83ec000b90, always with addr=10.0.0.2, port=4420 ...]
00:29:07.523 [2024-11-25 13:28:04.905547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.905573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.905684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.905710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.905824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.905849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.905989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.906102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-11-25 13:28:04.906240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.906363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.906475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.906597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.906707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-11-25 13:28:04.906844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.906869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.906995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.907021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.907161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.907187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.907310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.907342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.907435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.907462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-11-25 13:28:04.907547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.907573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.907676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.907701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.907848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.907874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.907981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.908109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-11-25 13:28:04.908247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.908400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.908544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.908653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.908820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-11-25 13:28:04.908966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.908992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.909100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.909126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.909222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.909248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.909336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.909363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-11-25 13:28:04.909505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-11-25 13:28:04.909531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-11-25 13:28:04.909668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.909694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.909788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.909815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.909897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.909924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.910044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.910156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-11-25 13:28:04.910275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.910415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.910529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.910650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.910799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-11-25 13:28:04.910942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.910968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.911077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.911104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.911232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.911258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.911361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.911388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.911515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.911542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-11-25 13:28:04.911653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.911679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.911790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.911816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.911942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.911968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.912111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.912137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.912263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.912289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-11-25 13:28:04.912388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.912414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.912523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.912549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.912661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.912687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.912804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.912830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.912913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.912940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-11-25 13:28:04.913030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.913058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.913202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.913228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.913332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.913358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.913467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.913493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.913610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.913637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-11-25 13:28:04.913750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.913777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.913920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.913946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.914058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.914084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.914200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.914226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.914337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.914364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-11-25 13:28:04.914477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.914503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.914616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.914642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.914781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.914807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-11-25 13:28:04.914901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-11-25 13:28:04.914927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.915009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.915035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-11-25 13:28:04.915176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.915202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.915319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.915345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.915427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.915453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.915570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.915596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.915718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.915748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-11-25 13:28:04.915839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.915865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.915979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.916005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.916122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.916148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.916236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.916261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-11-25 13:28:04.916387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-11-25 13:28:04.916414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
[... identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420" retry entries from 13:28:04.916537 through 13:28:04.924897 omitted ...]
00:29:07.526 [2024-11-25 13:28:04.924993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.526 [2024-11-25 13:28:04.925019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.526 qpair failed and we were unable to recover it.
00:29:07.526 [2024-11-25 13:28:04.925139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c7f30 is same with the state(6) to be set
00:29:07.527 [2024-11-25 13:28:04.925326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-11-25 13:28:04.925366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.527 qpair failed and we were unable to recover it.
00:29:07.527 [2024-11-25 13:28:04.925493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-11-25 13:28:04.925521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.527 qpair failed and we were unable to recover it.
00:29:07.527 [2024-11-25 13:28:04.925606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-11-25 13:28:04.925633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.527 qpair failed and we were unable to recover it.
00:29:07.527 [2024-11-25 13:28:04.925717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.527 [2024-11-25 13:28:04.925742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.527 qpair failed and we were unable to recover it.
[... further "connect() failed, errno = 111" retry entries, alternating between tqpair=0x7f83f4000b90 and tqpair=0x7f83ec000b90 (addr=10.0.0.2, port=4420), from 13:28:04.925857 through 13:28:04.931918 omitted ...]
00:29:07.528 [2024-11-25 13:28:04.932059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.932085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.932186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.932213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.932328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.932358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.932501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.932526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.932647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.932673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 
00:29:07.528 [2024-11-25 13:28:04.932790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.932817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.932911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.932936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.933013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.933135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.933256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 
00:29:07.528 [2024-11-25 13:28:04.933373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.933492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.933638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.933753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.933862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.933890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 
00:29:07.528 [2024-11-25 13:28:04.933977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.934004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.934087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.934112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.934228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.934261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.934386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.934413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.934499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.934525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 
00:29:07.528 [2024-11-25 13:28:04.934667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.934703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.934845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.934880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.935000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.935034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.935179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.935207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.935325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.935364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 
00:29:07.528 [2024-11-25 13:28:04.935485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.935532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.935709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.935758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.528 qpair failed and we were unable to recover it. 00:29:07.528 [2024-11-25 13:28:04.935878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.528 [2024-11-25 13:28:04.935926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.936039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.936066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.936202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.936229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 
00:29:07.529 [2024-11-25 13:28:04.936373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.936400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.936547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.936572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.936708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.936744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.936915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.936950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.937088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.937122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 
00:29:07.529 [2024-11-25 13:28:04.937270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.937297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.937422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.937449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.937545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.937573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.937696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.937746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.937853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.937890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 
00:29:07.529 [2024-11-25 13:28:04.938021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.938135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.938264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.938395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.938517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 
00:29:07.529 [2024-11-25 13:28:04.938659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.938803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.938913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.938939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.939026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.939134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 
00:29:07.529 [2024-11-25 13:28:04.939259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.939380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.939520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.939632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.939767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 
00:29:07.529 [2024-11-25 13:28:04.939906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.939930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.940023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.940052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.940169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.940201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.940315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.940345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.940491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.940518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 
00:29:07.529 [2024-11-25 13:28:04.940636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.529 [2024-11-25 13:28:04.940664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.529 qpair failed and we were unable to recover it. 00:29:07.529 [2024-11-25 13:28:04.940773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.940800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.940917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.940945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.941041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.941067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.941161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.941187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 
00:29:07.530 [2024-11-25 13:28:04.941294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.941325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.941470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.941497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.941581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.941607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.941731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.941757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.941849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.941875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 
00:29:07.530 [2024-11-25 13:28:04.941983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.942009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.942162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.942190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.942316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.942344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.942483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.942532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 00:29:07.530 [2024-11-25 13:28:04.942642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.530 [2024-11-25 13:28:04.942691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.530 qpair failed and we were unable to recover it. 
00:29:07.530 [2024-11-25 13:28:04.942861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.530 [2024-11-25 13:28:04.942910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.530 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats continuously from 13:28:04.942 through 13:28:04.960, alternating between tqpair=0x7f83ec000b90 and tqpair=0x7f83f4000b90, always with errno = 111, addr=10.0.0.2, port=4420 ...]
00:29:07.533 [2024-11-25 13:28:04.960233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.960267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.960405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.960434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.960573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.960621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.960724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.960775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.960884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.960932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 
00:29:07.533 [2024-11-25 13:28:04.961025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.961051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.961164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.961191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.961337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.961364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.961480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.961507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.961595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.961622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 
00:29:07.533 [2024-11-25 13:28:04.961740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.961769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.961881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.961907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.962022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.962048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.962157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.962184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.962300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.962331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 
00:29:07.533 [2024-11-25 13:28:04.962424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.962450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.962619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.962671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.962808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.962858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.962960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.963012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.963154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.963180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 
00:29:07.533 [2024-11-25 13:28:04.963262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.963289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.963407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.963455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.963584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.963619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.963758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.963791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.533 qpair failed and we were unable to recover it. 00:29:07.533 [2024-11-25 13:28:04.963926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.533 [2024-11-25 13:28:04.963960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 
00:29:07.534 [2024-11-25 13:28:04.964103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.964129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.964212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.964237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.964352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.964384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.964488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.964523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.964622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.964655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 
00:29:07.534 [2024-11-25 13:28:04.964763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.964797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.964934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.964981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.965094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.965128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.965276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.965317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.965455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.965500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 
00:29:07.534 [2024-11-25 13:28:04.965638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.965665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.965785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.965811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.965959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.965992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.966132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.966166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.966316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.966364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 
00:29:07.534 [2024-11-25 13:28:04.966500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.966534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.966721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.966755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.966877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.966912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.967087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.967136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.967257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.967284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 
00:29:07.534 [2024-11-25 13:28:04.967433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.967482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.967598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.967649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.967783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.967833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.967975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.968002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.968122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.968149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 
00:29:07.534 [2024-11-25 13:28:04.968263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.968289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.968431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.968457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.968589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.968622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.968760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.968794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.968941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.968981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 
00:29:07.534 [2024-11-25 13:28:04.969131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.969165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.969316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.969365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.969451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.969478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.534 [2024-11-25 13:28:04.969644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.534 [2024-11-25 13:28:04.969678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.534 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.969812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.969845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 
00:29:07.535 [2024-11-25 13:28:04.969950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.969994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.970176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.970209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.970386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.970413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.970518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.970544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.970672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.970698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 
00:29:07.535 [2024-11-25 13:28:04.970804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.970836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.970920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.970946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.971032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.971058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.971157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.971186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.971321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.971376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 
00:29:07.535 [2024-11-25 13:28:04.971539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.971584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.971724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.971759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.971873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.971907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.972030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.972063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 00:29:07.535 [2024-11-25 13:28:04.972205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.535 [2024-11-25 13:28:04.972239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.535 qpair failed and we were unable to recover it. 
00:29:07.535 [2024-11-25 13:28:04.972389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.972416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.972551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.972589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.972690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.972724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.972837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.972870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.973009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.973042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.973181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.973214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.973401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.973428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.973556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.973582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.973667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.973694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.973867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.973911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.974050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.974076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.974207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.974241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.974359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.974387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.974530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.974556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.974665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.974714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.974856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.974891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.974998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.975048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.975189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.975222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.975374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.975401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.975493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.975523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.975614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.975641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.535 [2024-11-25 13:28:04.975727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.535 [2024-11-25 13:28:04.975754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.535 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.975858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.975884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.976024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.976050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.976204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.976238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.976435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.976475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.976591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.976618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.976767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.976816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.977023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.977071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.977177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.977204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.977395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.977422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.977514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.977542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.977667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.977693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.977815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.977841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.977953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.977979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.978099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.978125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.978210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.978236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.978322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.978350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.978462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.978487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.978610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.978636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.978801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.978835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.978939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.978972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.979108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.979141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.979258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.979284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.979424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.979450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.979570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.979596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.979738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.979773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.979886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.979919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.980063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.980096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.980206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.980232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.980343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.980370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.980453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.980480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.980595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.980624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.980733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.980781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.980928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.980962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.981112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.981157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.981298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.981336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.981425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.981452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.981596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.981622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.981703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.536 [2024-11-25 13:28:04.981734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.536 qpair failed and we were unable to recover it.
00:29:07.536 [2024-11-25 13:28:04.981855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.981882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.982072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.982106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.982215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.982250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.982388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.982416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.982564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.982597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.982749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.982784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.982932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.982967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.983208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.983257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.983356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.983383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.983472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.983500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.983622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.983670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.983797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.983845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.983982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.984129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.984299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.984451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.984570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.984708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.984816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.984958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.984984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.985061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.985087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.985198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.985224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.985313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.985357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.985508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.985544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.985711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.985745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.985907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.985959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.986056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.986084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.986180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.986207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.986295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.986332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.986464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.986512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.986631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.986658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.986744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.986771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.986869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.986896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.987021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.987048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.987160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.987188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.987326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.987353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.987439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.987465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.987579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.987605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.537 qpair failed and we were unable to recover it.
00:29:07.537 [2024-11-25 13:28:04.987717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.537 [2024-11-25 13:28:04.987744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.987856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.987886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.987974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.988903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.988989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.989015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.989125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.989153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.989267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.989293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.989409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.989443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.989556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.989591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.989745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.989779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.989950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.538 [2024-11-25 13:28:04.989984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.538 qpair failed and we were unable to recover it.
00:29:07.538 [2024-11-25 13:28:04.990099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.990135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.990267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.990308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.990425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.990451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.990567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.990602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.990772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.990806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 
00:29:07.538 [2024-11-25 13:28:04.990920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.990954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.991123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.991169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.991283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.991318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.991434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.991462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.991592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.991638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 
00:29:07.538 [2024-11-25 13:28:04.991779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.991828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.991963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.991989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.992079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.992105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.992220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.992246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.992341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.992369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 
00:29:07.538 [2024-11-25 13:28:04.992452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.992479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.992570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.992596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.538 qpair failed and we were unable to recover it. 00:29:07.538 [2024-11-25 13:28:04.992676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.538 [2024-11-25 13:28:04.992702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.992826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.992852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.992958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.992984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.993091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.993117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.993240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.993267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.993369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.993397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.993533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.993580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.993715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.993768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.993871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.993919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.994026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.994052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.994165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.994192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.994280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.994312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.994452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.994478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.994595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.994621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.994736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.994762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.994856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.994882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.994986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.995021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.995197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.995230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.995407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.995433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.995514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.995541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.995654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.995680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.995828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.995874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.995990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.996177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.996293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.996418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.996534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.996669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.996783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.996901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.996927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.997012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.997039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.997152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.997178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.997320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.997347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.997484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.997510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.997631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.997678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.997831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.997865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.998004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.998040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.539 [2024-11-25 13:28:04.998208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.998234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 
00:29:07.539 [2024-11-25 13:28:04.998379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.539 [2024-11-25 13:28:04.998415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.539 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.998560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.998593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.998781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.998831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.998964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.999000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.999110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.999136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 
00:29:07.540 [2024-11-25 13:28:04.999216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.999243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.999387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.999436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.999573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.999619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.999736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.999782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:04.999906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:04.999936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 
00:29:07.540 [2024-11-25 13:28:05.000017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.000162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.000270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.000423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.000538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 
00:29:07.540 [2024-11-25 13:28:05.000682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.000791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.000925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.000973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.001097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.001124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 00:29:07.540 [2024-11-25 13:28:05.001243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.540 [2024-11-25 13:28:05.001270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.540 qpair failed and we were unable to recover it. 
00:29:07.540 [2024-11-25 13:28:05.001373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.540 [2024-11-25 13:28:05.001402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.540 qpair failed and we were unable to recover it.
[... the same three-line record (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 13:28:05.001515 through 13:28:05.019663, alternating between tqpair=0x7f83f4000b90 and tqpair=0x7f83ec000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:29:07.543 [2024-11-25 13:28:05.019777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.019803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.019915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.019941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.020098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.020132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.020278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.020322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.020422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.020448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 
00:29:07.543 [2024-11-25 13:28:05.020591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.020625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.020810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.020835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.020953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.020980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.021102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.021130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.021217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.021244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 
00:29:07.543 [2024-11-25 13:28:05.021364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.021392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.021537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.021588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.021716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.021763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-11-25 13:28:05.021878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.543 [2024-11-25 13:28:05.021924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.022013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.022131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.022251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.022425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.022565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.022676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.022791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.022933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.022961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.023058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.023084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.023205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.023234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.023328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.023356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.023466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.023516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.023651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.023700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.023839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.023886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.024008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.024054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.024181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.024208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.024325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.024353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.024440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.024468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.024588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.024622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.024737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.024772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.024892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.024933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.025101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.025134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.025240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.025287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.025414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.025441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.025576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.025610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.025725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.025760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.025875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.025909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.026042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.026076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.026179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.026214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.026391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.026419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.026531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.026558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.026639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.026666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.026756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.026782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.026875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.026900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.027011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.027047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.027181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.027207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.027295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.027327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.027434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.027461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.027579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.027605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.027688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.027734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-11-25 13:28:05.027882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.544 [2024-11-25 13:28:05.027919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-11-25 13:28:05.028057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.028091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.028232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.028259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.028383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.028410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.028504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.028530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.028613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.028640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.028756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.028782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.028928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.028973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.029102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.029128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.029242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.029268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.029388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.029415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.029529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.029555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.029643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.029670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.029791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.029817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.029909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.029935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.030027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.030054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.030195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.030229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.030349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.030376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.030458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.030485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.030566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.030590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.030698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.030739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.030882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.030917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.031054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.031088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.031225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.031250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.031347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.031373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.031512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.031538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.031649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.031675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.031752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.031779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.031872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.031899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.031988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.032031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.032167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.032200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.032390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.032417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.032496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.032522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.032641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.032667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.032772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.032798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.032887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.032914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.033006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.033057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.033187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.033221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.033347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.033373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.033490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.033516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.033642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.033675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 
00:29:07.545 [2024-11-25 13:28:05.033872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.033906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.545 [2024-11-25 13:28:05.034009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.545 [2024-11-25 13:28:05.034043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.545 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.034192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.034233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.034377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.034404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.034514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.034541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.034680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.034715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.034841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.034888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.035064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.035098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.035217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.035284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.035443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.035468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.035580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.035607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.035766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.035800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.036050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.036076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.036189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.036215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.036300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.036334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.036422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.036449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.036533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.036558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.036702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.036736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.036875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.036909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.037075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.037114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.037228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.037261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.037410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.037437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.037574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.037623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.037769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.037811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.037920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.037966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.038088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.038131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.038240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.038285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.038390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.038415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.038531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.038557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.038724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.038758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.038863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.038894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.039043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.039078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.039223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.039257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.039443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.039476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.039617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.039650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.039790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.039823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.039958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.039992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.040163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.040209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.040372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.040433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.040564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.040622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 
00:29:07.546 [2024-11-25 13:28:05.040793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.040827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.040959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.040991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.041100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.546 [2024-11-25 13:28:05.041133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.546 qpair failed and we were unable to recover it. 00:29:07.546 [2024-11-25 13:28:05.041271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.041311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.041450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.041483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.041646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.041679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.041823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.041860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.041971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.042005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.042114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.042147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.042290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.042352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.042497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.042531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.042670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.042705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.042811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.042847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.043028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.043062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.043175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.043209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.043322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.043357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.043487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.043521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.043628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.043661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.043830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.043862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.044009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.044047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.044193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.044226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.044366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.044401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.044579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.044612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.044754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.044787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.044953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.044986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.045087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.045120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.045258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.045292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.045445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.045477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.045622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.045654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.045772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.045806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.045953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.045986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.046157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.046189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.046327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.046362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.046481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.046506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.046601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.046626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.046766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.046800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.046914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.046946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.047109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.047157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.047291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.047334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.047480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.047513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 
00:29:07.547 [2024-11-25 13:28:05.047662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.547 [2024-11-25 13:28:05.047696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.547 qpair failed and we were unable to recover it. 00:29:07.547 [2024-11-25 13:28:05.047807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.548 [2024-11-25 13:28:05.047840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.548 qpair failed and we were unable to recover it. 00:29:07.548 [2024-11-25 13:28:05.047981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.548 [2024-11-25 13:28:05.048014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.548 qpair failed and we were unable to recover it. 00:29:07.548 [2024-11-25 13:28:05.048159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.548 [2024-11-25 13:28:05.048192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.548 qpair failed and we were unable to recover it. 00:29:07.548 [2024-11-25 13:28:05.048345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.548 [2024-11-25 13:28:05.048379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.548 qpair failed and we were unable to recover it. 
00:29:07.548 [2024-11-25 13:28:05.048496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.048529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.048639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.048680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.048852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.048887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.049033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.049065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.049199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.049232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.049343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.049379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.049497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.049530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.049638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.049672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.049788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.049820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.049951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.049983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.050112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.050146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.050277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.050326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.050448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.050481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.050601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.050634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.050801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.050834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.050986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.051020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.051190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.051224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.051342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.051375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.051518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.051550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.051694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.548 [2024-11-25 13:28:05.051728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.548 qpair failed and we were unable to recover it.
00:29:07.548 [2024-11-25 13:28:05.051877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.051909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.052048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.052081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.052225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.052259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.052379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.052412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.052558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.052590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.052741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.052777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.052923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.052957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.053124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.053170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.053314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.053352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.053460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.053493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.053637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.053671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.053846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.053880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.053990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.054023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.054161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.054196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.054339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.054375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.054492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.054526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.054700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.054752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.054907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.054942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.055089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.055126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.055266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.055301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.055412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.055446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.055569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.055610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.055750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.055784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.055951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.055984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.056097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.056132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.056278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.056322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.056486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.056520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.056670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.056713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.056806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.056833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.056989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.057023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.057169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.057202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.057345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.057381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.057525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.057569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.057653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.057679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.057771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.057798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.549 [2024-11-25 13:28:05.057897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.549 [2024-11-25 13:28:05.057946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.549 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.058088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.058132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.058246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.058271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.058425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.058459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.058571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.058605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.058754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.058788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.058925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.058959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.059095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.059128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.059243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.059276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.059386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.059420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.059563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.059599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.059753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.059787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.059947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.059972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.060070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.060097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.060240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.060266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.060409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.060443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.060577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.060610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.060777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.060811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.060952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.060985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.061103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.061136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.061273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.061316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.061465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.061499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.061612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.061645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.061788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.061823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.061961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.061995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.062141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.062175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.062358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.062392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.062510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.062536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.062667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.062702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.062850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.062884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.063000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.063034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.063174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.063209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.063372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.063400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.063517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.063544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.063651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.063676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.550 [2024-11-25 13:28:05.063805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.550 [2024-11-25 13:28:05.063831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.550 qpair failed and we were unable to recover it.
00:29:07.551 [2024-11-25 13:28:05.064011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.551 [2024-11-25 13:28:05.064037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.551 qpair failed and we were unable to recover it.
00:29:07.551 [2024-11-25 13:28:05.064127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.551 [2024-11-25 13:28:05.064153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.551 qpair failed and we were unable to recover it.
00:29:07.551 [2024-11-25 13:28:05.064243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.551 [2024-11-25 13:28:05.064290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.551 qpair failed and we were unable to recover it.
00:29:07.551 [2024-11-25 13:28:05.064446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.551 [2024-11-25 13:28:05.064480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.551 qpair failed and we were unable to recover it.
00:29:07.551 [2024-11-25 13:28:05.064656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.064690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.064808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.064842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.064982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.065016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.065136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.065169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.065299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.065340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 
00:29:07.551 [2024-11-25 13:28:05.065457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.065489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.065605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.065639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.065826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.065853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.065949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.065974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.066129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.066163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 
00:29:07.551 [2024-11-25 13:28:05.066299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.066340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.066488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.066520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.066663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.066697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.066814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.066847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.066964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.066997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 
00:29:07.551 [2024-11-25 13:28:05.067152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.067187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.067335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.067371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.067551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.067587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.067733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.067767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.067913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.067949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 
00:29:07.551 [2024-11-25 13:28:05.068124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.068159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.068312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.068347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.068462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.068496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.068613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.068649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.068796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.068832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 
00:29:07.551 [2024-11-25 13:28:05.068977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.069013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.069185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.069229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.069444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.069514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.069711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.069778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.069944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.069981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 
00:29:07.551 [2024-11-25 13:28:05.070137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.551 [2024-11-25 13:28:05.070171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.551 qpair failed and we were unable to recover it. 00:29:07.551 [2024-11-25 13:28:05.070323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.070359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.070500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.070536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.070679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.070715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.070861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.070896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 
00:29:07.552 [2024-11-25 13:28:05.071046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.071080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.071200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.071236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.071393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.071428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.071603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.071637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.071785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.071820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 
00:29:07.552 [2024-11-25 13:28:05.071988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.072014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.072152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.072177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.072317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.072354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.072501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.072537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.072677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.072713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 
00:29:07.552 [2024-11-25 13:28:05.072889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.072931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.073048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.073074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.073156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.073207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.073404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.073449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.073587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.073613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 
00:29:07.552 [2024-11-25 13:28:05.073729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.073762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.073906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.073940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.074068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.074102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.074254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.074288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.074454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.074489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 
00:29:07.552 [2024-11-25 13:28:05.074629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.074664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.074782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.074816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.074991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.075025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.075173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.075207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 00:29:07.552 [2024-11-25 13:28:05.075384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.552 [2024-11-25 13:28:05.075421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.552 qpair failed and we were unable to recover it. 
00:29:07.553 [2024-11-25 13:28:05.075570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.075605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.075776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.075810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.075926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.075961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.076111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.076147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.076259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.076294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 
00:29:07.553 [2024-11-25 13:28:05.076449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.076483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.076607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.076648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.076789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.076823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.076944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.076977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.077152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.077188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 
00:29:07.553 [2024-11-25 13:28:05.077358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.077384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.077529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.077553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.077702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.077738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.077910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.077944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.078088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.078124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 
00:29:07.553 [2024-11-25 13:28:05.078268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.078310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.078455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.078489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.078664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.078698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.078840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.078874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.078990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.079024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 
00:29:07.553 [2024-11-25 13:28:05.079182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.079216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.079402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.079438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.079566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.079601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.079751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.079785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.079919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.079953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 
00:29:07.553 [2024-11-25 13:28:05.080077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.080112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.080235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.080269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.080457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.080492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.080633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.080669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 00:29:07.553 [2024-11-25 13:28:05.080818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.553 [2024-11-25 13:28:05.080852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.553 qpair failed and we were unable to recover it. 
00:29:07.556 [2024-11-25 13:28:05.099238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.099275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.099514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.099555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.099681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.099709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.099817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.099856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.100009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.100048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 
00:29:07.556 [2024-11-25 13:28:05.100200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.100238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.100408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.100447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.100576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.100615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.100772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.100810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.100952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.100989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 
00:29:07.556 [2024-11-25 13:28:05.101178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.101216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.101326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.101363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.101496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.101532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.101656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.556 [2024-11-25 13:28:05.101694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.556 qpair failed and we were unable to recover it. 00:29:07.556 [2024-11-25 13:28:05.101889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.101915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 
00:29:07.557 [2024-11-25 13:28:05.102054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.102080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.102273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.102326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.102498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.102536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.102688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.102725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.102836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.102873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 
00:29:07.557 [2024-11-25 13:28:05.103057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.103093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.103243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.103279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.103466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.103526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.103732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.103783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.103940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.103982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 
00:29:07.557 [2024-11-25 13:28:05.104144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.104184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.104347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.104388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.104511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.104552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.104720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.104761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.104928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.104968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 
00:29:07.557 [2024-11-25 13:28:05.105129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.105171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.105374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.105414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.105535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.105563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.105711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.105749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.105899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.105937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 
00:29:07.557 [2024-11-25 13:28:05.106129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.106167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.106294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.106339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.106482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.106519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.106717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.106757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.106886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.106925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 
00:29:07.557 [2024-11-25 13:28:05.107053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.107091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.107248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.107288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.107459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.107498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.107696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.107735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.557 [2024-11-25 13:28:05.107928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.107966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 
00:29:07.557 [2024-11-25 13:28:05.108092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.557 [2024-11-25 13:28:05.108151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.557 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.108344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.108389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.108550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.108591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.108710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.108750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.108889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.108929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 
00:29:07.558 [2024-11-25 13:28:05.109077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.109117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.109316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.109357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.109516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.109558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.109685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.109730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.109850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.109877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 
00:29:07.558 [2024-11-25 13:28:05.110007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.110050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.110190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.110229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.110399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.110439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.110591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.110630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.110777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.110817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 
00:29:07.558 [2024-11-25 13:28:05.110948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.110987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.111114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.111152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.111313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.111353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.111494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.111541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.111672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.111711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 
00:29:07.558 [2024-11-25 13:28:05.111873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.111915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.112091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.112130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.112279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.112351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.112501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.112544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.112705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.112745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 
00:29:07.558 [2024-11-25 13:28:05.112935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.112976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.113129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.113191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.113409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.113451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.113621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.113648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.113744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.113770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 
00:29:07.558 [2024-11-25 13:28:05.113907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.113950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.114120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.114162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.114309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.114366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.114476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.114510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.114628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.114662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 
00:29:07.558 [2024-11-25 13:28:05.114838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.558 [2024-11-25 13:28:05.114872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.558 qpair failed and we were unable to recover it. 00:29:07.558 [2024-11-25 13:28:05.114989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.115024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.115144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.115177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.115289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.115337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.115444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.115478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 
00:29:07.559 [2024-11-25 13:28:05.115622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.115666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.115782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.115808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.115924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.115958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.116059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.116094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.116210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.116244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 
00:29:07.559 [2024-11-25 13:28:05.116376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.116411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.116524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.116556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.116726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.116759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.116877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.116909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 00:29:07.559 [2024-11-25 13:28:05.117025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.559 [2024-11-25 13:28:05.117057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.559 qpair failed and we were unable to recover it. 
00:29:07.559 [2024-11-25 13:28:05.117167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.117200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.117340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.117375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.117479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.117513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.117628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.117661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.117771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.117806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.117947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.117985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.118131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.118166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.118277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.118321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.118505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.118548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.118688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.118722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.118825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.118859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.119044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.119079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.119214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.119247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.119356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.119392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.119533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.119568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.119706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.119739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.119873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.119907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.120069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.120103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.120212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.120256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.120358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.120385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.120563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.559 [2024-11-25 13:28:05.120596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.559 qpair failed and we were unable to recover it.
00:29:07.559 [2024-11-25 13:28:05.120706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.120738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.120889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.120921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.121035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.121067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.121194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.121225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.121359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.121391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.121503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.121533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.121661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.121691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.121821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.121852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.121952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.121981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.122084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.122115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.122241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.122271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.122414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.122444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.122572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.122603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.560 [2024-11-25 13:28:05.122712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.560 [2024-11-25 13:28:05.122742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.560 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.122847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.122878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.122983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.123163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.123274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.123392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.123511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.123621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.123737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.123840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.123865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.124001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.124041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.124203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.124242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.124401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.124434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.124529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.124560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.124688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.124724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.124892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.124931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.125099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.125150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.125300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.125373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.125500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.125531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.125647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.125687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.125839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.125877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.125996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.126034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.126152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.126189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.126358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.126389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.126488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.126520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.126621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.126650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.126797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.839 [2024-11-25 13:28:05.126835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.839 qpair failed and we were unable to recover it.
00:29:07.839 [2024-11-25 13:28:05.126988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.127035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.127204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.127244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.127405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.127436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.127536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.127567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.127716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.127755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.127916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.127967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.128149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.128191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.128383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.128415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.128511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.128541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.128705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.128743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.128916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.128955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.129128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.129174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.129322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.129370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.129499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.129528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.129680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.129724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.129836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.129887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.130028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.130072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.130224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.130270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.130437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.130469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.130582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.130614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.130714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.130746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.130853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.130885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.131008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.131048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.131214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.131255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.131409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.131442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.131589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.131627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.131766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.131803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.131994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.132039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.132157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.132195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.132385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.132415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.132509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.132539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.132752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.132783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.132914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.132947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.133167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.840 [2024-11-25 13:28:05.133207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.840 qpair failed and we were unable to recover it.
00:29:07.840 [2024-11-25 13:28:05.133374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.840 [2024-11-25 13:28:05.133421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.840 qpair failed and we were unable to recover it. 00:29:07.840 [2024-11-25 13:28:05.133591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.840 [2024-11-25 13:28:05.133620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.840 qpair failed and we were unable to recover it. 00:29:07.840 [2024-11-25 13:28:05.133764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.840 [2024-11-25 13:28:05.133792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.840 qpair failed and we were unable to recover it. 00:29:07.840 [2024-11-25 13:28:05.133907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.840 [2024-11-25 13:28:05.133960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.840 qpair failed and we were unable to recover it. 00:29:07.840 [2024-11-25 13:28:05.134121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.134162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.134326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.134378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.134512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.134556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.134688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.134715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.134826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.134852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.134966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.135009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.135178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.135218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.135363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.135394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.135518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.135548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.135662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.135694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.135865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.135904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.136107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.136146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.136313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.136366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.136525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.136555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.136711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.136749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.136898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.136938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.137085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.137125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.137300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.137347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.137445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.137476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.137623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.137663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.137824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.137861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.138018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.138056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.140445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.140508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.140723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.140765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.140934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.140974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.141130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.141188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.141340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.141382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.141573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.141612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.141799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.141868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.142061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.142098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.142210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.142239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.142396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.142438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.142568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.142607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.142786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.142826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.142986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.143027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.143152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.143193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 
00:29:07.841 [2024-11-25 13:28:05.143364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.143404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.143564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.841 [2024-11-25 13:28:05.143604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.841 qpair failed and we were unable to recover it. 00:29:07.841 [2024-11-25 13:28:05.143759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.143800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.143922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.143974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.144077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.144107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.842 [2024-11-25 13:28:05.144241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.144282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.144474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.144507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.144635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.144666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.144816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.144858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.144974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.145015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.842 [2024-11-25 13:28:05.145142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.145191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.145288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.145335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.145460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.145499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.145683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.145713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.145878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.145908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.842 [2024-11-25 13:28:05.146080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.146121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.146288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.146345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.146506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.146548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.146706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.146745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.146952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.146992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.842 [2024-11-25 13:28:05.147127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.147170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.147362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.147404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.147572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.147611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.147783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.147816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.147951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.147983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.842 [2024-11-25 13:28:05.148184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.148213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.148350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.148381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.148561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.148592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.148703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.148740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.148879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.148921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.842 [2024-11-25 13:28:05.149101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.149131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.149240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.149272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.149383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.149414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.149524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.149562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.149695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.149737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.842 [2024-11-25 13:28:05.149877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.149916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.150075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.150115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.150264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.150315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.150514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.150554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 00:29:07.842 [2024-11-25 13:28:05.150722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.842 [2024-11-25 13:28:05.150753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.842 qpair failed and we were unable to recover it. 
00:29:07.843 [2024-11-25 13:28:05.150889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.150929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.151013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.151039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.151186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.151226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.151381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.151421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.151578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.151619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 
00:29:07.843 [2024-11-25 13:28:05.151781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.151823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.152020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.152060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.152234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.152275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.152415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.152454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.152604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.152643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 
00:29:07.843 [2024-11-25 13:28:05.152799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.152841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.153056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.153082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.153203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.153227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.153400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.153444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 00:29:07.843 [2024-11-25 13:28:05.153582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.843 [2024-11-25 13:28:05.153641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.843 qpair failed and we were unable to recover it. 
00:29:07.843 [2024-11-25 13:28:05.153778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.153836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.154034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.154065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.154176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.154206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.154378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.154421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.154625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.154666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.154809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.154850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.155036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.155078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.155214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.155256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.155440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.155482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.155630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.155670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.155834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.155873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.156033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.156073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.156196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.156240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.156410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.156451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.156618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.156659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.156847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.156872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.156990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.157015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.843 qpair failed and we were unable to recover it.
00:29:07.843 [2024-11-25 13:28:05.157107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.843 [2024-11-25 13:28:05.157152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.157283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.157342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.157508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.157540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.157669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.157699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.157835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.157874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.158049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.158090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.158249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.158291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.158436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.158476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.158629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.158668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.158827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.158868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.159011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.159051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.159206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.159245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.159391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.159432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.159559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.159600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.159742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.159783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.159933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.159972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.160131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.160172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.160342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.160374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.160475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.160504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.160626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.160669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.160814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.160854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.161015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.161055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.161230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.161261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.161374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.161404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.161550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.161589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.161801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.161826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.161937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.161964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.162106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.162132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.162248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.162274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.162431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.162457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.162584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.162625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.162799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.162842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.163012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.163057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.163192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.163253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.163450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.163494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.163699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.163743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.163882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.163924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.844 [2024-11-25 13:28:05.164106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.844 [2024-11-25 13:28:05.164149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.844 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.164350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.164395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.164550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.164581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.164685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.164715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.164918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.164954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.165123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.165154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.165337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.165379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.165577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.165619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.165774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.165815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.166010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.166069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.166263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.166324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.166526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.166588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.166783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.166828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.166997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.167037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.167199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.167247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.167413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.167446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.167625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.167651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.167796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.167822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.167966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.167997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.168111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.168143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.168329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.168374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.168623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.168687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.168823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.168867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.169053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.169081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.169203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.169230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.169360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.169431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.169611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.169654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.169819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.169862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.170064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.170108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.170270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.170322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.170466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.170511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.170660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.170687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.170846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.170889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.171061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.171103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.171252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.171293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.171489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.171516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.171630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.171658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.171791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.845 [2024-11-25 13:28:05.171832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.845 qpair failed and we were unable to recover it.
00:29:07.845 [2024-11-25 13:28:05.171988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-11-25 13:28:05.172030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.846 qpair failed and we were unable to recover it.
00:29:07.846 [2024-11-25 13:28:05.172188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-11-25 13:28:05.172230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.846 qpair failed and we were unable to recover it.
00:29:07.846 [2024-11-25 13:28:05.172381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.846 [2024-11-25 13:28:05.172424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:07.846 qpair failed and we were unable to recover it.
00:29:07.846 [2024-11-25 13:28:05.172589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.172631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.172766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.172809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.172969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.173011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.173177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.173227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.173437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.173479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 
00:29:07.846 [2024-11-25 13:28:05.173643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.173684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.173853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.173897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.174075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.174116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.174337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.174381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.174651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.174715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 
00:29:07.846 [2024-11-25 13:28:05.174945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.175006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.175152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.175196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.175392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.175464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.175718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.175782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.176015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.176077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 
00:29:07.846 [2024-11-25 13:28:05.176263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.176313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.176480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.176522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.176706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.176748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.176917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.176961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.177134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.177176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 
00:29:07.846 [2024-11-25 13:28:05.177365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.177407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.177566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.177608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.177779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.177821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.177954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.177994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.178161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.178202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 
00:29:07.846 [2024-11-25 13:28:05.178356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.178398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.178534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.178580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.178688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.178715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.178814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.178854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.179022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.179063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 
00:29:07.846 [2024-11-25 13:28:05.179201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.179243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.179444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.179487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.179653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.179694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.179858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.179899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 00:29:07.846 [2024-11-25 13:28:05.180059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-11-25 13:28:05.180085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.846 qpair failed and we were unable to recover it. 
00:29:07.846 [2024-11-25 13:28:05.180198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.180225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.180314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.180368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.180528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.180570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.180735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.180778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.180942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.180984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 
00:29:07.847 [2024-11-25 13:28:05.181176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.181217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.181392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.181435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.181607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.181648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.181802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.181850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.181989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.182031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 
00:29:07.847 [2024-11-25 13:28:05.182166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.182208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.182377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.182419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.182621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.182662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.182807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.182850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.183023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.183065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 
00:29:07.847 [2024-11-25 13:28:05.183222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.183264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.183418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.183459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.183666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.183708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.183889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.183933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.184115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.184160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 
00:29:07.847 [2024-11-25 13:28:05.184350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.184411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.184661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.184726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.184985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.185029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.185202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.185257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.185424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.185467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 
00:29:07.847 [2024-11-25 13:28:05.185636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.185678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.185868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.185929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.186068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.186114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.186335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.186380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.186567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.186632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 
00:29:07.847 [2024-11-25 13:28:05.186791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.186833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.186988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.187030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.187186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.187226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.187484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.187546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.187735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.187783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 
00:29:07.847 [2024-11-25 13:28:05.187912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.187939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.188085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.188129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.188327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-11-25 13:28:05.188371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.847 qpair failed and we were unable to recover it. 00:29:07.847 [2024-11-25 13:28:05.188632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.188694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.188878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.188922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 
00:29:07.848 [2024-11-25 13:28:05.189099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.189145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.189288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.189357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.189579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.189648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.189871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.189932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.190137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.190180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 
00:29:07.848 [2024-11-25 13:28:05.190336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.190379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.190491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.190531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.190732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.190773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.190929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.190993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.191186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.191226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 
00:29:07.848 [2024-11-25 13:28:05.191370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.191417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.191620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.191684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.191925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.191988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.192190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.192234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.192449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.192525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 
00:29:07.848 [2024-11-25 13:28:05.192795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.192863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.193040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.193083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.193257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.193300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.193491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.193564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.193706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.193749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 
00:29:07.848 [2024-11-25 13:28:05.194008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.194070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.194272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.194326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.194556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.194620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.194853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.194894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.195042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.195084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 
00:29:07.848 [2024-11-25 13:28:05.195290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.195346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.195543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.195569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.195685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.195710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.195795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.195846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 00:29:07.848 [2024-11-25 13:28:05.195997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.848 [2024-11-25 13:28:05.196038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.848 qpair failed and we were unable to recover it. 
00:29:07.848 [2024-11-25 13:28:05.196231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.196288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.196481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.196546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.196719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.196789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.197007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.197072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.197216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.197261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 
00:29:07.849 [2024-11-25 13:28:05.197474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.197516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.197709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.197749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.197920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.197963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.198140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.198184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.198393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.198438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 
00:29:07.849 [2024-11-25 13:28:05.198613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.198658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.198849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.198893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.199057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.199098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.199246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.199287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.199468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.199510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 
00:29:07.849 [2024-11-25 13:28:05.199646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.199687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.199847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.199888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.200040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.200081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.200206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.200249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.200433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.200475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 
00:29:07.849 [2024-11-25 13:28:05.200632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.200673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.200830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.200872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.200997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.201038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.201211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.201253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.201404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.201448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 
00:29:07.849 [2024-11-25 13:28:05.201719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.201760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.201957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.202001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.202129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.202172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.202313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.202360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.202624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.202686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 
00:29:07.849 [2024-11-25 13:28:05.202949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.203012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.203183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.203226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.203422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.203465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.203659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.203700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.203895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.203937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 
00:29:07.849 [2024-11-25 13:28:05.204064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.204106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.204245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.204285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.204498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.849 [2024-11-25 13:28:05.204525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.849 qpair failed and we were unable to recover it. 00:29:07.849 [2024-11-25 13:28:05.204639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.204667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.204813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.204854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.204980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.205023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.205157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.205202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.205371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.205416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.205606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.205648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.205787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.205828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.205959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.206007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.206176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.206218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.206363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.206407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.206565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.206607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.206739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.206780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.206917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.206959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.207114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.207156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.207324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.207366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.207515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.207556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.207729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.207771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.207977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.208005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.208175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.208216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.208357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.208399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.208567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.208609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.208752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.208794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.208930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.208973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.209098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.209141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.209343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.209387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.209549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.209595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.209685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.209711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.209857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.209898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.210096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.210138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.210296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.210345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.210503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.210544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.210732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.210759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.210869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.210895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.211024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.211066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.211245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.211287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.211464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.211505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.211651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.211693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 
00:29:07.850 [2024-11-25 13:28:05.211856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.850 [2024-11-25 13:28:05.211897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.850 qpair failed and we were unable to recover it. 00:29:07.850 [2024-11-25 13:28:05.212093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.851 [2024-11-25 13:28:05.212135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.851 qpair failed and we were unable to recover it. 00:29:07.851 [2024-11-25 13:28:05.212312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.851 [2024-11-25 13:28:05.212354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.851 qpair failed and we were unable to recover it. 00:29:07.851 [2024-11-25 13:28:05.212522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.851 [2024-11-25 13:28:05.212563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.851 qpair failed and we were unable to recover it. 00:29:07.851 [2024-11-25 13:28:05.212731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.851 [2024-11-25 13:28:05.212772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.851 qpair failed and we were unable to recover it. 
00:29:07.852 [2024-11-25 13:28:05.220849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.852 [2024-11-25 13:28:05.220891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.852 qpair failed and we were unable to recover it. 00:29:07.852 [2024-11-25 13:28:05.221051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.852 [2024-11-25 13:28:05.221092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.852 qpair failed and we were unable to recover it. 00:29:07.852 [2024-11-25 13:28:05.221227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.852 [2024-11-25 13:28:05.221267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:07.852 qpair failed and we were unable to recover it. 00:29:07.852 [2024-11-25 13:28:05.221433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.852 [2024-11-25 13:28:05.221494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.852 qpair failed and we were unable to recover it. 00:29:07.852 [2024-11-25 13:28:05.221674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.852 [2024-11-25 13:28:05.221739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.852 qpair failed and we were unable to recover it. 
00:29:07.854 [2024-11-25 13:28:05.236679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.236716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.236931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.236999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.237321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.237382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.237513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.237552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.237726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.237769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 
00:29:07.854 [2024-11-25 13:28:05.238101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.238167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.238371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.238417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.238596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.238677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.238977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.239044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.239298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.239345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 
00:29:07.854 [2024-11-25 13:28:05.239504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.239542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.239759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.239803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.240013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.240082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.240289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.240378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.240555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.240599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 
00:29:07.854 [2024-11-25 13:28:05.240814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.240867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.241117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.241144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.241262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.241289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.241449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.241491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.241693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.241757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 
00:29:07.854 [2024-11-25 13:28:05.241989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.242035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.242245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.242328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.242499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.242541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.242713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.242757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.242903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.242946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 
00:29:07.854 [2024-11-25 13:28:05.243116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.243162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.243523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.243566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.243737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.243780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.244050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.244077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.244219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.244246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 
00:29:07.854 [2024-11-25 13:28:05.244437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.244481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.244658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.854 [2024-11-25 13:28:05.244716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.854 qpair failed and we were unable to recover it. 00:29:07.854 [2024-11-25 13:28:05.244973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.245034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.245281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.245360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.245604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.245670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.245975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.246035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.246339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.246408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.246666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.246731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.246977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.247038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.247339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.247378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.247511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.247549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.247730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.247796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.248044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.248112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.248416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.248458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.248633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.248680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.248817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.248845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.249004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.249070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.249346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.249385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.249513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.249550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.249683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.249720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.249935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.250001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.250294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.250376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.250630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.250695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.250953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.251019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.251285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.251364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.251569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.251648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.251886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.251931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.252097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.252151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.252286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.252333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.252556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.252621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.252878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.252942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.253192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.253260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.253578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.253643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.253899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.253968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.254233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.254300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.254593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.254658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.254886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.254913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.255137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.255206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.255526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.255592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 00:29:07.855 [2024-11-25 13:28:05.255900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.855 [2024-11-25 13:28:05.255966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.855 qpair failed and we were unable to recover it. 
00:29:07.855 [2024-11-25 13:28:05.256230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.856 [2024-11-25 13:28:05.256294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.856 qpair failed and we were unable to recover it. 00:29:07.856 [2024-11-25 13:28:05.256531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.856 [2024-11-25 13:28:05.256596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.856 qpair failed and we were unable to recover it. 00:29:07.856 [2024-11-25 13:28:05.256887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.856 [2024-11-25 13:28:05.256952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.856 qpair failed and we were unable to recover it. 00:29:07.856 [2024-11-25 13:28:05.257176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.856 [2024-11-25 13:28:05.257242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.856 qpair failed and we were unable to recover it. 00:29:07.856 [2024-11-25 13:28:05.257547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.856 [2024-11-25 13:28:05.257613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.856 qpair failed and we were unable to recover it. 
00:29:07.856 [2024-11-25 13:28:05.257851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.856 [2024-11-25 13:28:05.257894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.856 qpair failed and we were unable to recover it. 
[... the same error triplet — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats verbatim through 13:28:05.291893; duplicate log lines elided ...]
00:29:07.859 [2024-11-25 13:28:05.292114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.292180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.292475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.292541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.292809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.292873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.293153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.293181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.293293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.293336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 
00:29:07.859 [2024-11-25 13:28:05.293445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.293486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.293722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.293788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.294018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.294083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.294337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.294405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.294609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.294675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 
00:29:07.859 [2024-11-25 13:28:05.294953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.295019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.295271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.295325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.295507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.295574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.295831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.295896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.296119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.296190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 
00:29:07.859 [2024-11-25 13:28:05.296457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.296527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.296815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.296881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.297146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.297213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.297482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.297551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.297820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.297887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 
00:29:07.859 [2024-11-25 13:28:05.298193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.298261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.298615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.298681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.298974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.299040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.299342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.299410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.299675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.299717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 
00:29:07.859 [2024-11-25 13:28:05.299913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.299944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.300036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.300065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.859 [2024-11-25 13:28:05.300212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.859 [2024-11-25 13:28:05.300288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.859 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.300572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.300645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.300906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.300971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.301225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.301291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.301576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.301642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.301895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.301974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.302229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.302299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.302576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.302620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.302755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.302798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.303059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.303130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.303372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.303442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.303726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.303754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.303876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.303902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.303995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.304020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.304141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.304168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.304278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.304360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.304597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.304662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.304966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.305031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.305354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.305422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.305685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.305723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.305872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.305912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.306133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.306199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.306475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.306518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.306693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.306735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.307048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.307113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.307420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.307488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.307743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.307809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.308039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.308104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.308355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.308423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.308726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.308792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.309050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.309115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.309386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.309454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.309704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.309772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.310065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.310130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.310386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.310453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.310733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.310798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.311001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.311067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.860 [2024-11-25 13:28:05.311338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.311405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 
00:29:07.860 [2024-11-25 13:28:05.311665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.860 [2024-11-25 13:28:05.311710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.860 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.311867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.311905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.312130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.312195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.312501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.312569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.312771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.312837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 
00:29:07.861 [2024-11-25 13:28:05.313124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.313188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.313463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.313505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.313697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.313763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.314016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.314082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 00:29:07.861 [2024-11-25 13:28:05.314333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.861 [2024-11-25 13:28:05.314403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:07.861 qpair failed and we were unable to recover it. 
00:29:07.861 [2024-11-25 13:28:05.314632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.861 [2024-11-25 13:28:05.314699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:07.861 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously, first for tqpair=0x7f83ec000b90 (through 13:28:05.319504) and then for tqpair=0x19b9fa0 (through 13:28:05.350627), all against addr=10.0.0.2, port=4420 ...]
00:29:07.864 [2024-11-25 13:28:05.350861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.350927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.351219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.351285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.351532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.351598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.351850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.351916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.352224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.352288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 
00:29:07.864 [2024-11-25 13:28:05.352596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.352661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.352917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.352984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.353219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.353285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.353536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.353602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.353815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.353880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 
00:29:07.864 [2024-11-25 13:28:05.354129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.354189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.354350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.354409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.354672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.354739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.355002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.355072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.355388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.355431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 
00:29:07.864 [2024-11-25 13:28:05.355571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.355613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.355758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.355785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.355901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.355928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.356158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.356223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.356524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.356591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 
00:29:07.864 [2024-11-25 13:28:05.356882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.356947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.357193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.357257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.357523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.357588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.357797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.357865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 00:29:07.864 [2024-11-25 13:28:05.358131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.864 [2024-11-25 13:28:05.358207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.864 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.358564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.358632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.358920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.358962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.359092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.359134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.359358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.359424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.359644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.359720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.360018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.360084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.360374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.360442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.360734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.360800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.361024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.361091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.361341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.361412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.361697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.361739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.361910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.361991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.362286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.362339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.362492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.362533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.362830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.362896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.363182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.363221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.363358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.363395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.363620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.363658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.363820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.363858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.364106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.364173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.364419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.364488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.364776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.364842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.365094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.365159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.365447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.365515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.365817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.365882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.366143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.366214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.366441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.366519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.366788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.366827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.366991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.367029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.367327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.367371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.367545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.367588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.367852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.367917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.368184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.368249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.368546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.368584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.368772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.368826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 
00:29:07.865 [2024-11-25 13:28:05.369110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.369175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.369467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.369496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.369614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.865 [2024-11-25 13:28:05.369640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.865 qpair failed and we were unable to recover it. 00:29:07.865 [2024-11-25 13:28:05.369817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.369887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.370112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.370152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 
00:29:07.866 [2024-11-25 13:28:05.370371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.370437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.370681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.370747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.370985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.371051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.371329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.371397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.371646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.371712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 
00:29:07.866 [2024-11-25 13:28:05.371982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.372022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.372223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.372289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.372539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.372604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.372865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.372932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 00:29:07.866 [2024-11-25 13:28:05.373205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.866 [2024-11-25 13:28:05.373271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.866 qpair failed and we were unable to recover it. 
00:29:07.866 [2024-11-25 13:28:05.373574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.866 [2024-11-25 13:28:05.373640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.866 qpair failed and we were unable to recover it.
00:29:07.869 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for every retry against tqpair=0x19b9fa0 (addr=10.0.0.2, port=4420) through 2024-11-25 13:28:05.406804 ...]
00:29:07.869 [2024-11-25 13:28:05.406900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.406951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.407154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.407217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.407454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.407517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.407715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.407777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.408016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.408078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 
00:29:07.869 [2024-11-25 13:28:05.408358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.408425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.408631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.408700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.408986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.409051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.409277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.409357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.409557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.409623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 
00:29:07.869 [2024-11-25 13:28:05.409913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.409954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.410178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.410244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.410510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.410578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.410840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.410882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.411018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.411062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 
00:29:07.869 [2024-11-25 13:28:05.411345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.411412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.411617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.411682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.411915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.411979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.412225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.412291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.412600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.412665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 
00:29:07.869 [2024-11-25 13:28:05.412876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.412941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.413202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.413269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.869 [2024-11-25 13:28:05.413572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.869 [2024-11-25 13:28:05.413636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.869 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.413887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.413953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.414213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.414278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.414608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.414672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.414970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.415035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.415301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.415386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.415655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.415720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.415980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.416046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.416369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.416435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.416649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.416714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.416947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.416973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.417089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.417117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.417330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.417395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.417686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.417752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.418020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.418084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.418380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.418448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.418688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.418747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.419002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.419067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.419328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.419370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.419599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.419675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.419848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.419883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.420007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.420043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.420165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.420201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.420336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.420379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.420513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.420548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.420723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.420758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.420875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.420909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.421026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.421062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.421187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.421222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.421342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.421385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.421515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.421550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.421665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.421700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.421846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.421882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.422002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.422037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.422184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.422221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.422369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.422406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.422555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.422590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.422738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.422775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 
00:29:07.870 [2024-11-25 13:28:05.422913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.870 [2024-11-25 13:28:05.422949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.870 qpair failed and we were unable to recover it. 00:29:07.870 [2024-11-25 13:28:05.423094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.423129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.423274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.423319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.423432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.423468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.423582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.423629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 
00:29:07.871 [2024-11-25 13:28:05.423740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.423766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.423846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.423872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.423949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.423974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.424063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.424089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.424177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.424223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 
00:29:07.871 [2024-11-25 13:28:05.424376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.424412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.424525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.424560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.424686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.424721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.424873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.424908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 00:29:07.871 [2024-11-25 13:28:05.425025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.871 [2024-11-25 13:28:05.425060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.871 qpair failed and we were unable to recover it. 
00:29:07.871 [2024-11-25 13:28:05.425210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.871 [2024-11-25 13:28:05.425246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:07.871 qpair failed and we were unable to recover it.
00:29:07.871 [... the same posix_sock_create connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x19b9fa0, addr=10.0.0.2, port=4420 repeat continuously through 2024-11-25 13:28:05.459715 ...]
00:29:07.874 [2024-11-25 13:28:05.459982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.460048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.460349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.460419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.460716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.460780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.461074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.461139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.461400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.461467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 
00:29:07.874 [2024-11-25 13:28:05.461730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.461795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.462094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.462158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.462419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.462491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.462781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.462846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.463098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.463163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 
00:29:07.874 [2024-11-25 13:28:05.463472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.463549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.463777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.463843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.464132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.464198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.464439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.464505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.464791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.464858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 
00:29:07.874 [2024-11-25 13:28:05.465148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.465214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.465560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.465626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.465928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.465993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.466253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.466337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.466608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.466674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 
00:29:07.874 [2024-11-25 13:28:05.466899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.466965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.874 [2024-11-25 13:28:05.467165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.874 [2024-11-25 13:28:05.467235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.874 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.467545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.467613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.467872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.467938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.468237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.468340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 
00:29:07.875 [2024-11-25 13:28:05.468611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.468677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.468966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.469032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.469344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.469412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.469701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.469768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.470079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.470144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 
00:29:07.875 [2024-11-25 13:28:05.470400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.470469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.470765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.470832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.471089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.471156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.471376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.471443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.471731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.471797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 
00:29:07.875 [2024-11-25 13:28:05.472061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.472137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.472385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.472451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.472630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.472695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.473007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.473073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.473337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.473404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 
00:29:07.875 [2024-11-25 13:28:05.473669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.473738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.474031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.474058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.474206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.474232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.474440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.474478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.474600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.474636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 
00:29:07.875 [2024-11-25 13:28:05.474873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.474938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.475225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.475290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.475609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.475673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.475938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.476004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.476257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.476349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 
00:29:07.875 [2024-11-25 13:28:05.476654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.476719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.476997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.477063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:07.875 [2024-11-25 13:28:05.477276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.875 [2024-11-25 13:28:05.477364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:07.875 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.477640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.477706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.477997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.478063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 
00:29:08.154 [2024-11-25 13:28:05.478366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.478434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.478695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.478761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.479053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.479118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.479396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.479463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.479701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.479767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 
00:29:08.154 [2024-11-25 13:28:05.479979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.480045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.480333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.480401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.480646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.480712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.480923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.480992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.481239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.481322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 
00:29:08.154 [2024-11-25 13:28:05.481587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.481655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.481910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.481975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.482178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.482244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.482500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.482567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.482776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.482855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 
00:29:08.154 [2024-11-25 13:28:05.483076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.483141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.483424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.483453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.483573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.483598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.483815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.483879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 00:29:08.154 [2024-11-25 13:28:05.484190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.154 [2024-11-25 13:28:05.484255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.154 qpair failed and we were unable to recover it. 
00:29:08.154 [2024-11-25 13:28:05.484594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.154 [2024-11-25 13:28:05.484661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.154 qpair failed and we were unable to recover it.
[The three log records above repeat verbatim for the remainder of this span (wall-clock timestamps advancing from 13:28:05.484594 to 13:28:05.521799, console clock 00:29:08.154 through 00:29:08.157): every connect() attempt to 10.0.0.2 port 4420 for tqpair=0x19b9fa0 fails with errno = 111 (ECONNREFUSED), and each qpair fails without recovery.]
00:29:08.157 [2024-11-25 13:28:05.521929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.521965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.157 [2024-11-25 13:28:05.522132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.522197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.157 [2024-11-25 13:28:05.522483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.522548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.157 [2024-11-25 13:28:05.522802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.522868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.157 [2024-11-25 13:28:05.523131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.523197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 
00:29:08.157 [2024-11-25 13:28:05.523471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.523537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.157 [2024-11-25 13:28:05.523763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.523827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.157 [2024-11-25 13:28:05.524083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.524118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.157 [2024-11-25 13:28:05.524220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.157 [2024-11-25 13:28:05.524255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.157 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.524396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.524472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 
00:29:08.158 [2024-11-25 13:28:05.524667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.524749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.524998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.525064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.525349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.525385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.525560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.525596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.525841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.525906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 
00:29:08.158 [2024-11-25 13:28:05.526141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.526205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.526392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.526458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.526709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.526773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.527063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.527127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.527381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.527447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 
00:29:08.158 [2024-11-25 13:28:05.527738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.527804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.528099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.528164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.528473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.528540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.528833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.528897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.529169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.529234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 
00:29:08.158 [2024-11-25 13:28:05.529542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.529608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.529854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.529922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.530143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.530208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.530491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.530558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.530781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.530846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 
00:29:08.158 [2024-11-25 13:28:05.531135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.531201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.531481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.531547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.531804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.531868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.532120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.532185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.532466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.532532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 
00:29:08.158 [2024-11-25 13:28:05.532818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.532882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.533126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.533191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.533456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.533533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.533819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.533884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.534102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.534166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 
00:29:08.158 [2024-11-25 13:28:05.534449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.534515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.534821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.158 [2024-11-25 13:28:05.534890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.158 qpair failed and we were unable to recover it. 00:29:08.158 [2024-11-25 13:28:05.535129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.535194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.535482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.535553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.535844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.535910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [2024-11-25 13:28:05.536160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.536225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.536532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.536598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.536901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.536937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.537058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.537094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.537236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.537272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [2024-11-25 13:28:05.537580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.537616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.537745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.537784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.537967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.538040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.538249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.538335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.538567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.538631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [2024-11-25 13:28:05.538924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.538990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.539232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.539323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.539619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.539684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.539977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.540013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.540163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.540199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [2024-11-25 13:28:05.540409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.540474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.540694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.540759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.541052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.541117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.541415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.541481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.541772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.541837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [2024-11-25 13:28:05.542138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.542204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.542481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.542546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.542842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.542906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.543191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.543256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.543531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.543596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [2024-11-25 13:28:05.543876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.543940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.544195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.544260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.544513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.544577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.544837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.544902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.545152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.545218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [2024-11-25 13:28:05.545501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.545567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.545861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.545925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.546186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.546222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.547864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.547948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 00:29:08.159 [2024-11-25 13:28:05.548215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.159 [2024-11-25 13:28:05.548281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.159 qpair failed and we were unable to recover it. 
00:29:08.159 [... identical connect() failed (errno = 111) / qpair-failed sequence for tqpair=0x19b9fa0 (addr=10.0.0.2, port=4420) repeated from 13:28:05.547864 through 13:28:05.585173 ...]
00:29:08.162 [2024-11-25 13:28:05.585434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.162 [2024-11-25 13:28:05.585500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.162 qpair failed and we were unable to recover it. 00:29:08.162 [2024-11-25 13:28:05.585779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.162 [2024-11-25 13:28:05.585846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.162 qpair failed and we were unable to recover it. 00:29:08.162 [2024-11-25 13:28:05.586108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.162 [2024-11-25 13:28:05.586172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.162 qpair failed and we were unable to recover it. 00:29:08.162 [2024-11-25 13:28:05.586412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.162 [2024-11-25 13:28:05.586478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.162 qpair failed and we were unable to recover it. 00:29:08.162 [2024-11-25 13:28:05.586696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.162 [2024-11-25 13:28:05.586761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.162 qpair failed and we were unable to recover it. 
00:29:08.162 [2024-11-25 13:28:05.587050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.587114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.587338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.587408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.587666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.587732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.588031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.588095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.588393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.588461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 
00:29:08.163 [2024-11-25 13:28:05.588688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.588754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.588944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.589009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.589221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.589286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.589632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.589698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.589946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.590010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 
00:29:08.163 [2024-11-25 13:28:05.590297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.590388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.590640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.590706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.590963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.591028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.591291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.591375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.591640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.591704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 
00:29:08.163 [2024-11-25 13:28:05.591989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.592054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.592346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.592414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.594046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.594122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.594416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.594483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.594800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.594866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 
00:29:08.163 [2024-11-25 13:28:05.595155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.595222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.595499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.595566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.595806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.595870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.596125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.596194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.596486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.596554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 
00:29:08.163 [2024-11-25 13:28:05.596858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.596921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.597204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.597270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.597553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.597619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.597863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.597929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.598228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.598293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 
00:29:08.163 [2024-11-25 13:28:05.598638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.598674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.598851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.598886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.599181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.599255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.599557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.599624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.163 qpair failed and we were unable to recover it. 00:29:08.163 [2024-11-25 13:28:05.599861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.163 [2024-11-25 13:28:05.599924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.600180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.600244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.600500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.600535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.600702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.600737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.600905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.600940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.601085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.601120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.601387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.601455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.601675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.601739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.601976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.602041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.602298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.602397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.602699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.602763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.603013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.603082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.603353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.603423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.603684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.603748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.604036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.604100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.604366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.604436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.604699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.604763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.605013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.605078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.605361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.605427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.605695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.605759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.605959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.606026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.606328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.606395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.606658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.606726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.606967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.607031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.607249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.607328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.607550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.607634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.607939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.608003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.608253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.608342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.608573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.608637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.608822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.608886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.609143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.609206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.611285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.611383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.612695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.612726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.612936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.612989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.613725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.613756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 00:29:08.164 [2024-11-25 13:28:05.613974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.164 [2024-11-25 13:28:05.614027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.164 qpair failed and we were unable to recover it. 
00:29:08.164 [2024-11-25 13:28:05.614732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.164 [2024-11-25 13:28:05.614777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.164 qpair failed and we were unable to recover it.
00:29:08.164 [2024-11-25 13:28:05.615009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.164 [2024-11-25 13:28:05.615061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.164 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.615204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.615237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.615430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.615498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.615649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.615698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.615876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.615938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.616074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.616101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.616224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.616250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.616376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.616404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.616553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.616610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.616758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.616816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.616936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.616962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.617049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.617076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.617183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.617210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.617298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.617332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.617439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.617466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.617588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.617614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.617759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.617786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.617900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.617927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.618908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.618935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.619030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.619056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.619199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.619226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.619354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.619382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.619502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.619530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.619649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.619674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.619750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.619776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.619863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.619888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.620026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.620053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.620166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.620195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.620315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.620342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.620434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.620459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.620553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.165 [2024-11-25 13:28:05.620578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.165 qpair failed and we were unable to recover it.
00:29:08.165 [2024-11-25 13:28:05.620691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.620716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.620850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.620876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.620966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.620993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.621097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.621122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.621237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.621262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.621412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.621453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.621566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.621606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.621741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.621767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.621884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.621911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.622050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.622076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.622191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.622218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.625973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.626013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.626186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.626218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.626374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.626401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.626541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.626567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.626667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.626693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.626811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.626836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.626941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.626970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.627075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.627104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.627255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.627284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.627449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.627476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.627607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.627637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.627801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.627830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.627990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.628019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.628138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.628166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.628310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.628336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.628439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.628464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.628583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.628609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.628736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.628764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.628886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.628912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.629004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.629032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.629114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.629145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.629264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.629291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.629419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.629444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.629557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.629583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.629693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.629720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.629852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.629881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.630046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.166 [2024-11-25 13:28:05.630076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.166 qpair failed and we were unable to recover it.
00:29:08.166 [2024-11-25 13:28:05.630234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.630260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.630351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.630376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.630471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.630497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.630589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.630614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.630762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.630787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.630874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.630919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.631911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.631938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.632043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.632069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.632157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.632184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.632319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.632345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.632459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.632486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.632573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.632598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.632741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.632768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.632890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.632916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.633090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.633237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.633391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.633500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.633612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.633737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.633876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.633993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.634019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.634169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.634194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.634308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.634335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.634454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.634480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.634576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.634601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.634750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.634792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.167 qpair failed and we were unable to recover it.
00:29:08.167 [2024-11-25 13:28:05.634887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.167 [2024-11-25 13:28:05.634912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.168 qpair failed and we were unable to recover it.
00:29:08.168 [2024-11-25 13:28:05.635026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.168 [2024-11-25 13:28:05.635053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.168 qpair failed and we were unable to recover it.
00:29:08.168 [2024-11-25 13:28:05.635131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.168 [2024-11-25 13:28:05.635156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.168 qpair failed and we were unable to recover it.
00:29:08.168 [2024-11-25 13:28:05.635251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.168 [2024-11-25 13:28:05.635277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.168 qpair failed and we were unable to recover it.
00:29:08.168 [2024-11-25 13:28:05.635374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.635400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.635483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.635509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.635659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.635684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.635793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.635819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.635930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.635955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 
00:29:08.168 [2024-11-25 13:28:05.636042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.636067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.636176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.636202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.636288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.636326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.636408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.636450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.636583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.636609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 
00:29:08.168 [2024-11-25 13:28:05.636695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.636720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.636865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.636891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.637004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.637032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.637183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.637209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.637316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.637361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 
00:29:08.168 [2024-11-25 13:28:05.637494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.637520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.637642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.637671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.637805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.637832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.637956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.637981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.638133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.638159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 
00:29:08.168 [2024-11-25 13:28:05.638312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.638356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.638497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.638522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.638663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.638711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.638839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.638867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.638985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 
00:29:08.168 [2024-11-25 13:28:05.639122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.639238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.639378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.639552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.639708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 
00:29:08.168 [2024-11-25 13:28:05.639846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.639960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.639985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.640093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.640119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.640216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.640255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 00:29:08.168 [2024-11-25 13:28:05.640370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.168 [2024-11-25 13:28:05.640399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.168 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.640492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.640525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.640617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.640643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.640756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.640782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.640919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.640945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.641031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.641170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.641318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.641429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.641530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.641678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.641783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.641891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.641918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.642027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.642053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.642163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.642188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.642334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.642364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.642479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.642506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.642622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.642649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.642741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.642768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.642882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.642908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.642998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.643158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.643309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.643422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.643532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.643671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.643789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.643920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.643946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.644038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.644065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.644171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.644196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.644317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.644364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.644461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.644491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.644594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.644624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.644723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.644752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.644873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.644907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.645026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.645073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 
00:29:08.169 [2024-11-25 13:28:05.645236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.645277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.645464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.645494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.169 [2024-11-25 13:28:05.645620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.169 [2024-11-25 13:28:05.645649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.169 qpair failed and we were unable to recover it. 00:29:08.170 [2024-11-25 13:28:05.645747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.170 [2024-11-25 13:28:05.645777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.170 qpair failed and we were unable to recover it. 00:29:08.170 [2024-11-25 13:28:05.645940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.170 [2024-11-25 13:28:05.645986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.170 qpair failed and we were unable to recover it. 
00:29:08.170 [2024-11-25 13:28:05.646115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.170 [2024-11-25 13:28:05.646152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.170 qpair failed and we were unable to recover it. 00:29:08.170 [2024-11-25 13:28:05.646257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.170 [2024-11-25 13:28:05.646291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.170 qpair failed and we were unable to recover it. 00:29:08.170 [2024-11-25 13:28:05.646450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.170 [2024-11-25 13:28:05.646479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.170 qpair failed and we were unable to recover it. 00:29:08.170 [2024-11-25 13:28:05.646584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.170 [2024-11-25 13:28:05.646612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.170 qpair failed and we were unable to recover it. 00:29:08.170 [2024-11-25 13:28:05.646723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.170 [2024-11-25 13:28:05.646770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.170 qpair failed and we were unable to recover it. 
[... the same three-line sequence — "connect() failed, errno = 111", "sock connection error of tqpair=0x7f83f4000b90 / 0x7f83e8000b90 / 0x7f83ec000b90 with addr=10.0.0.2, port=4420", "qpair failed and we were unable to recover it." — repeats through 2024-11-25 13:28:05.666 ...]
00:29:08.173 [2024-11-25 13:28:05.666953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.666986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.667157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.667191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.667346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.667377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.667530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.667558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.667698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.667730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-25 13:28:05.667879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.667908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.668110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.668143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.668288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.668346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.668499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.668527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.668659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.668687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-25 13:28:05.668783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.668829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.668958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.669004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.669193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.669227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.669390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.669419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.669540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.669569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-25 13:28:05.669708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.669756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.669926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.669959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.670103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.670137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.670334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.670364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.670477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.670505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-25 13:28:05.670666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.670694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.670838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.670871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.671020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.671054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.671221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.671256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.671408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.671470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-25 13:28:05.671646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.671713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.671959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.672015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.672192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.672226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.672464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.672499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 00:29:08.173 [2024-11-25 13:28:05.672641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.173 [2024-11-25 13:28:05.672676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.173 qpair failed and we were unable to recover it. 
00:29:08.173 [2024-11-25 13:28:05.672856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.672913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.673014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.673050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.673176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.673217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.673382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.673416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.673539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.673574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-25 13:28:05.673786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.673854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.673992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.674027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.674171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.674216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.674359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.674394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.674567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.674610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-25 13:28:05.674757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.674792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.674971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.675011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.675154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.675189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.675338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.675374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.675523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.675557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-25 13:28:05.675697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.675731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.675864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.675900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.676049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.676085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.676223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.676257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.676453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.676489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-25 13:28:05.676618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.676655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.676807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.676849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.676964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.676998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.677139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.677175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.677344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.677380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-25 13:28:05.677525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.677559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.677696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.677732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.677847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.677881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.678021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.678055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.678215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.678250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-25 13:28:05.678402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.678438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.678568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.678603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.678723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.678760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.678943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.678977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.679077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.679111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 
00:29:08.174 [2024-11-25 13:28:05.679250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.679284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.679473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.679507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.174 qpair failed and we were unable to recover it. 00:29:08.174 [2024-11-25 13:28:05.679651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.174 [2024-11-25 13:28:05.679687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.679828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.679863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.679995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.680029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 
00:29:08.175 [2024-11-25 13:28:05.680142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.680176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.680285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.680329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.680449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.680488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.680605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.680639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.680776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.680811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 
00:29:08.175 [2024-11-25 13:28:05.680923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.680959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.681103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.681138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.681251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.681284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.681433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.681468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 00:29:08.175 [2024-11-25 13:28:05.681627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.175 [2024-11-25 13:28:05.681672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.175 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-25 13:28:05.702278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.702334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.702515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.702583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.702782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.702846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.702971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.703012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.703159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.703193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-25 13:28:05.703372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.703408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.703549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.703584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.703763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.703797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.703944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.703990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.704098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.704143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-25 13:28:05.704296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.704337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.704475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.704510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.704688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.704722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.704866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.704900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.705053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.705089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-25 13:28:05.705219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.705253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.705503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.705558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.705717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.705773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.705907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.705941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.706093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.706128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.178 [2024-11-25 13:28:05.706274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.706317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.706494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.706529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.706680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.706714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.706862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.706897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 00:29:08.178 [2024-11-25 13:28:05.707034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.178 [2024-11-25 13:28:05.707069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.178 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.707172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.707205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.707382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.707417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.707529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.707563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.707698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.707732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.707889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.707924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.708071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.708105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.708245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.708279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.708437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.708471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.708616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.708650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.708767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.708802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.708914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.708947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.709123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.709157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.709296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.709337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.709479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.709514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.709616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.709661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.709812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.709855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.709963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.709997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.710174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.710214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.710339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.710376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.710484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.710519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.710651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.710686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.710822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.710866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.711043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.711077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.711195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.711230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.711407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.711443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.711648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.711691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.711858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.711903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.712051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.712085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.712186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.712220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.712412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.712473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.712717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.712774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.712895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.712929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.713077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.713111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.713234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.713268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.713521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.713578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 
00:29:08.179 [2024-11-25 13:28:05.713836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.713894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.714065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.714099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.179 qpair failed and we were unable to recover it. 00:29:08.179 [2024-11-25 13:28:05.714236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.179 [2024-11-25 13:28:05.714270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.714508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.714566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.714784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.714839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 
00:29:08.180 [2024-11-25 13:28:05.715040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.715096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.715238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.715274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.715490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.715556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.715776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.715843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.716041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.716097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 
00:29:08.180 [2024-11-25 13:28:05.716270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.716316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.716521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.716586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.716734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.716798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.717043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.717100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 00:29:08.180 [2024-11-25 13:28:05.717247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.180 [2024-11-25 13:28:05.717281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:08.180 qpair failed and we were unable to recover it. 
00:29:08.180 [2024-11-25 13:28:05.717450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.717511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.717720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.717777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.717921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.717976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.718139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.718173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.718322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.718359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.718580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.718653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.718838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.718897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.719049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.719089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.719239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.719273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.719500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.719551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.719792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.719852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.720022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.720082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.720226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.720262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.720449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.720513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.720702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.720761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.720992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.721050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.721196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.721232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.721406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.721464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.721622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.721668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.721814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.721848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.722057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.722091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.722243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.722278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.722452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.722487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.722640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.722674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.722851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.180 [2024-11-25 13:28:05.722886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.180 qpair failed and we were unable to recover it.
00:29:08.180 [2024-11-25 13:28:05.723023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.723057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.723216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.723250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.723433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.723467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.723620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.723655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.723773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.723807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.723953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.723988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.724131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.724165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.724281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.724335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.724445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.724479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.724783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.724890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.725203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.725281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.725578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.725646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.725943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.726015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.726332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.726399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.726550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.726608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.726897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.726976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.727203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.727269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.727528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.727563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.727743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.727811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.728043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.728110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.728376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.728413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.728586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.728622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.728859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.728940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.729204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.729276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.729479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.729514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.729650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.729685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.729899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.729966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.730229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.730264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.730442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.730478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.730579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.730623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.730855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.730927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.731148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.731205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.731350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.731387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.181 [2024-11-25 13:28:05.731571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.181 [2024-11-25 13:28:05.731631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.181 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.731849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.731912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.732059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.732122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.732241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.732278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.732479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.732549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.732838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.732906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.733151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.733218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.733460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.733497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.733677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.733747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.734044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.734110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.734368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.734405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.734525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.734562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.734874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.734950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.735225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.735318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.735524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.735560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.735744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.735824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.736101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.736172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.736433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.736469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.736650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.736685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.736935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.736970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.737244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.737323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.737504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.737539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.737753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.737820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.738111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.738167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.738375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.738413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.738568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.738646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.738899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.738967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.739230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.739276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.739464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.739500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.739661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.739740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.740053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.740120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.740395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.740431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.740577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.740658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.740926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.740992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.741293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.741382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.741535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.741570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.741762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.741818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.742113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.182 [2024-11-25 13:28:05.742190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.182 qpair failed and we were unable to recover it.
00:29:08.182 [2024-11-25 13:28:05.742421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.742459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.742593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.742629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.742763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.742804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.743048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.743118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.743330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.743366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.743512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.743547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.743768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.743835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.744096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.744165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.744449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.744485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.744691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.744757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.745063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.745130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.745381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.745421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.745623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.745691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.745996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.746064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.746370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.746405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.746539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.746574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.746728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.746764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.746993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.747061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.747377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.183 [2024-11-25 13:28:05.747413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.183 qpair failed and we were unable to recover it.
00:29:08.183 [2024-11-25 13:28:05.747599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.747666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.747962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.748036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.748322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.748390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.748685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.748756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.749019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.749086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-25 13:28:05.749376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.749444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.749717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.749795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.750083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.750161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.750405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.750472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.750768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.750844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-25 13:28:05.751069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.751142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.751429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.751507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.751804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.751894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.752207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.752282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.752552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.752624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 
00:29:08.183 [2024-11-25 13:28:05.752890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.752961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.753262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.753350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.753651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.753721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.183 [2024-11-25 13:28:05.753935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.183 [2024-11-25 13:28:05.754002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.183 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.754254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.754347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.754563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.754642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.754942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.755010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.755266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.755368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.755664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.755740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.756009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.756075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.756363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.756464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.756790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.756859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.757116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.757182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.757439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.757508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.757805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.757872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.758073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.758143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.758396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.758466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.758768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.758835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.759085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.759155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.759404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.759474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.759784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.759850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.760133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.760199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.760503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.760571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.760873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.760947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.761245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.761335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.761638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.761716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.761998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.762064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.762325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.762400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.762599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.762676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.762901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.762968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.763238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.763337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.763642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.763710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.764007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.764073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.764369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.764438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.764686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.764752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.765003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.765068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.765334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.765404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.765666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.765745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.766042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.766109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.184 [2024-11-25 13:28:05.766409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.766479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 
00:29:08.184 [2024-11-25 13:28:05.766725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.184 [2024-11-25 13:28:05.766793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.184 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.767018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.767087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.767389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.767458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.767698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.767765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.767999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.768064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 
00:29:08.185 [2024-11-25 13:28:05.768335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.768403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.768695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.768768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.769025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.769090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.769385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.769454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.769710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.769777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 
00:29:08.185 [2024-11-25 13:28:05.770074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.770143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.770425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.770494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.770782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.770861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.771151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.771217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.771543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.771618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 
00:29:08.185 [2024-11-25 13:28:05.771877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.771944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.772233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.772324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.772630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.772697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.772953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.773019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.773280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.773363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 
00:29:08.185 [2024-11-25 13:28:05.773589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.773658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.773925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.773995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.774247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.774336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.774655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.774722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 00:29:08.185 [2024-11-25 13:28:05.774998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.185 [2024-11-25 13:28:05.775066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:08.185 qpair failed and we were unable to recover it. 
00:29:08.461 [2024-11-25 13:28:05.799925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.461 [2024-11-25 13:28:05.799999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.461 qpair failed and we were unable to recover it.
00:29:08.461 [2024-11-25 13:28:05.800270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.461 [2024-11-25 13:28:05.800351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:08.461 qpair failed and we were unable to recover it.
00:29:08.461 [2024-11-25 13:28:05.800677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.461 [2024-11-25 13:28:05.800776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.461 qpair failed and we were unable to recover it.
00:29:08.461 [2024-11-25 13:28:05.801096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.461 [2024-11-25 13:28:05.801164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.461 qpair failed and we were unable to recover it.
00:29:08.461 [2024-11-25 13:28:05.801410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.461 [2024-11-25 13:28:05.801481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.461 qpair failed and we were unable to recover it.
00:29:08.462 [2024-11-25 13:28:05.813275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.813361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.813669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.813733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.814028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.814093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.814386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.814454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.814761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.814825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 
00:29:08.462 [2024-11-25 13:28:05.815086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.815150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.815446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.815511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.815803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.815873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.816124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.462 [2024-11-25 13:28:05.816188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.462 qpair failed and we were unable to recover it. 00:29:08.462 [2024-11-25 13:28:05.816472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.816537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.816841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.816907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.817201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.817265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.817547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.817611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.817877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.817941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.818229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.818300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.818604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.818668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.818955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.819029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.819287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.819371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.819603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.819666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.819895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.819963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.820206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.820271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.820604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.820668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.820925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.820988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.821209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.821274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.821589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.821653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.821893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.821957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.822213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.822277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.822568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.822633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.822913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.822984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.823282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.823366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.823652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.823716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.824006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.824077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.824374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.824439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.824721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.824786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.825083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.825144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.825411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.825467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.825686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.825741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.825964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.826019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.826293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.826385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.826680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.826745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.826997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.827061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.827347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.827422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.827643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.827710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.827904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.827969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.828273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.828354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 
00:29:08.463 [2024-11-25 13:28:05.828610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.828674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.828913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.463 [2024-11-25 13:28:05.828980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.463 qpair failed and we were unable to recover it. 00:29:08.463 [2024-11-25 13:28:05.829280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.829365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.829660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.829729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.829978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.830042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 
00:29:08.464 [2024-11-25 13:28:05.830240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.830335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.830618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.830683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.830931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.830997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.831281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.831368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.831620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.831685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 
00:29:08.464 [2024-11-25 13:28:05.831965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.832037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.832345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.832411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.832672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.832736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.833001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.833065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.833350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.833415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 
00:29:08.464 [2024-11-25 13:28:05.833748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.833811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.834104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.834174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.834425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.834491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.834788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.834852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.835110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.835184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 
00:29:08.464 [2024-11-25 13:28:05.835437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.835503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.835805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.835869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.836147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.836214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.836458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.836524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.836760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.836824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 
00:29:08.464 [2024-11-25 13:28:05.837071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.837137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.837407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.837474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.837712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.837776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.838024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.838091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 00:29:08.464 [2024-11-25 13:28:05.838345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.464 [2024-11-25 13:28:05.838412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.464 qpair failed and we were unable to recover it. 
00:29:08.464 [2024-11-25 13:28:05.838682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.464 [2024-11-25 13:28:05.838747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:08.464 qpair failed and we were unable to recover it.
00:29:08.464 [... the same three-record sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats verbatim, timestamps 2024-11-25 13:28:05.838999 through 13:28:05.876681 ...]
00:29:08.468 [2024-11-25 13:28:05.876972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.877036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.877285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.877381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.877636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.877704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.877933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.877998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.878278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.878373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 
00:29:08.468 [2024-11-25 13:28:05.878666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.878731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.878981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.879046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.879328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.879400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.879676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.879741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.879998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.880062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 
00:29:08.468 [2024-11-25 13:28:05.880355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.880421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.880728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.880793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.881010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.881074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.881330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.881395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.881643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.881711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 
00:29:08.468 [2024-11-25 13:28:05.881976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.468 [2024-11-25 13:28:05.882041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.468 qpair failed and we were unable to recover it. 00:29:08.468 [2024-11-25 13:28:05.882295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.882377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.882635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.882699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.882954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.883018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.883280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.883368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.469 [2024-11-25 13:28:05.883627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.883695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.883988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.884059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.884325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.884392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.884688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.884766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.885030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.885095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.469 [2024-11-25 13:28:05.885397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.885463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.885771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.885843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.886054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.886137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.886381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.886447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.886714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.886779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.469 [2024-11-25 13:28:05.887033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.887097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.887387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.887461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.887751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.887814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.888046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.888114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.888399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.888465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.469 [2024-11-25 13:28:05.888760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.888830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.889131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.889196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.889553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.889621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.889879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.889944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.890243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.890336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.469 [2024-11-25 13:28:05.890581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.890646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.890945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.891009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.891327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.891405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.891720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.891785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.892086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.892151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.469 [2024-11-25 13:28:05.892452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.892518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.892809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.892880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.893139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.893204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.893518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.893590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.893836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.893900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.469 [2024-11-25 13:28:05.894181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.894254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.894550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.894615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.894902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.894975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.895272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.895362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 00:29:08.469 [2024-11-25 13:28:05.895668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.469 [2024-11-25 13:28:05.895732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.469 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.896029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.896093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.896349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.896416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.896713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.896778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.896996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.897060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.897337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.897414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.897658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.897723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.897969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.898036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.898293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.898386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.898606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.898671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.898980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.899046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.899293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.899371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.899633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.899699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.899990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.900062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.900326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.900392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.900688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.900756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.901021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.901085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.901356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.901422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.901717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.901782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.902079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.902146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.902351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.902420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.902708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.902783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.903066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.903130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.903362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.903430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.903736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.903804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.904093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.904156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.904451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.904516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.904769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.904833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.905083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.905147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.905414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.905480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.905771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.905836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.906061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.906126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.906413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.906488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.906796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.906860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.907147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.907210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.907428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.907498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 
00:29:08.470 [2024-11-25 13:28:05.907790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.907861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.908154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.908228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.908510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.470 [2024-11-25 13:28:05.908576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.470 qpair failed and we were unable to recover it. 00:29:08.470 [2024-11-25 13:28:05.908834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.908899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.909194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.909261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 
00:29:08.471 [2024-11-25 13:28:05.909583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.909654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.909951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.910016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.910247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.910331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.910550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.910614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.910886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.910960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 
00:29:08.471 [2024-11-25 13:28:05.911206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.911272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.911562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.911627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.911920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.911985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.912185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.912249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.912539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.912603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 
00:29:08.471 [2024-11-25 13:28:05.912915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.912983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.913241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.913324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.913572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.913637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.913893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.913957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.914186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.914248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 
00:29:08.471 [2024-11-25 13:28:05.914547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.914612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.914863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.914932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.915178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.915241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.915520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.915596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.915834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.915904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 
00:29:08.471 [2024-11-25 13:28:05.916194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.916264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.916542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.916617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.916872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.916937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.917223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.917333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.917610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.917674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 
00:29:08.471 [2024-11-25 13:28:05.917923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.917990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.918209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.918273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.918517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.918584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.918881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.918948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.919212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.919276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 
00:29:08.471 [2024-11-25 13:28:05.919546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.919611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.919907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.919973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.920265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.471 [2024-11-25 13:28:05.920346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.471 qpair failed and we were unable to recover it. 00:29:08.471 [2024-11-25 13:28:05.920639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.920713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.921016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.921080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.921365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.921437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.921735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.921801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.922097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.922171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.922462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.922527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.922794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.922859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.923164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.923227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.923462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.923530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.923817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.923893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.924109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.924176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.924467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.924540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.924827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.924890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.925155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.925219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.925326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c7f30 (9): Bad file descriptor 00:29:08.472 [2024-11-25 13:28:05.925696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.925796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.926102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.926170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.926382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.926449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.926685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.926753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.927048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.927117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.927401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.927468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.927727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.927792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.928019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.928085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.928387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.928454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.928745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.928811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.929055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.929120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.929370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.929437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.929746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.929811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.930084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.930149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.930453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.930520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.930781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.930845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.931143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.931219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.931501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.931572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.931842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.931921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.932181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.932247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.932579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.932645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.932908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.932973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.472 [2024-11-25 13:28:05.933232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.933298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 
00:29:08.472 [2024-11-25 13:28:05.933576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.472 [2024-11-25 13:28:05.933642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.472 qpair failed and we were unable to recover it. 00:29:08.473 [2024-11-25 13:28:05.933931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.473 [2024-11-25 13:28:05.934004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.473 qpair failed and we were unable to recover it. 00:29:08.473 [2024-11-25 13:28:05.934299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.473 [2024-11-25 13:28:05.934379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.473 qpair failed and we were unable to recover it. 00:29:08.473 [2024-11-25 13:28:05.934672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.473 [2024-11-25 13:28:05.934743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.473 qpair failed and we were unable to recover it. 00:29:08.473 [2024-11-25 13:28:05.934991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.473 [2024-11-25 13:28:05.935056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.473 qpair failed and we were unable to recover it. 
00:29:08.473 [2024-11-25 13:28:05.935317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.473 [2024-11-25 13:28:05.935384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.473 qpair failed and we were unable to recover it.
00:29:08.476 [2024-11-25 13:28:05.974481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.974547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.974795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.974861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.975150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.975221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.975520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.975587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.975881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.975947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 
00:29:08.476 [2024-11-25 13:28:05.976204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.976269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.976509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.976575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.976888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.976954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.977240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.977330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.977595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.977661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 
00:29:08.476 [2024-11-25 13:28:05.977949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.978015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.978219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.978284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.978589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.978657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.978894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.978960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.979206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.979272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 
00:29:08.476 [2024-11-25 13:28:05.979576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.979649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.979908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.979973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.980259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.980340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.980631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.980703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.980968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.981034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 
00:29:08.476 [2024-11-25 13:28:05.981300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.981393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.981694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.981759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.982025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.982090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.982321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.982389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.982619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.982684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 
00:29:08.476 [2024-11-25 13:28:05.982971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.983046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.983321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.476 [2024-11-25 13:28:05.983388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.476 qpair failed and we were unable to recover it. 00:29:08.476 [2024-11-25 13:28:05.983677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.983750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.984048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.984113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.984413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.984481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 
00:29:08.477 [2024-11-25 13:28:05.984746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.984811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.985010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.985075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.985372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.985441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.985745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.985838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.986129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.986194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 
00:29:08.477 [2024-11-25 13:28:05.986504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.986571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.986787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.986852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.987143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.987215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.987447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.987517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.987811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.987881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 
00:29:08.477 [2024-11-25 13:28:05.988097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.988163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.988449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.988519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.988772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.988838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.989105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.989171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.989462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.989529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 
00:29:08.477 [2024-11-25 13:28:05.989827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.989891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.990132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.990197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.990462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.990532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.990824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.990898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.991207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.991272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 
00:29:08.477 [2024-11-25 13:28:05.991587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.991653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.991955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.992020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.992325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.992391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.992668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.992743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.993046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.993111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 
00:29:08.477 [2024-11-25 13:28:05.993413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.993480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.993696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.993761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.994016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.994083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.994333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.994399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.994692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.994762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 
00:29:08.477 [2024-11-25 13:28:05.995027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.995093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.995353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.995420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.995745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.995812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.996059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.477 [2024-11-25 13:28:05.996125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.477 qpair failed and we were unable to recover it. 00:29:08.477 [2024-11-25 13:28:05.996379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.996447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 
00:29:08.478 [2024-11-25 13:28:05.996675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.996745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.997037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.997108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.997411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.997479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.997775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.997841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.998093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.998159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 
00:29:08.478 [2024-11-25 13:28:05.998463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.998529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.998778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.998842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.999056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.999122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.999411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.999489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 00:29:08.478 [2024-11-25 13:28:05.999795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.478 [2024-11-25 13:28:05.999859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.478 qpair failed and we were unable to recover it. 
00:29:08.478 [2024-11-25 13:28:06.000111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.478 [2024-11-25 13:28:06.000180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.478 qpair failed and we were unable to recover it.
00:29:08.481 [... the same connect() failed / sock connection error / qpair failed sequence for tqpair=0x7f83ec000b90 (addr=10.0.0.2, port=4420) repeats with advancing timestamps through 13:28:06.031 ...]
00:29:08.481 [2024-11-25 13:28:06.031714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.031789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.032045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.032112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.032414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.032481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.032740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.032805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.033111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.033178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 
00:29:08.481 [2024-11-25 13:28:06.033467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.033535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.033829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.033897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.034127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.034192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.034420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.034487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.034735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.034802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 
00:29:08.481 [2024-11-25 13:28:06.035111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.035149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.035279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.035415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.035636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.035703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.035954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.036022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.036360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.036428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 
00:29:08.481 [2024-11-25 13:28:06.036671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.036736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.037025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.037103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.037417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.037485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.037781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.037852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.038161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.038238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 
00:29:08.481 [2024-11-25 13:28:06.038563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.481 [2024-11-25 13:28:06.038671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.481 qpair failed and we were unable to recover it. 00:29:08.481 [2024-11-25 13:28:06.038949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.039017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.039208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.039273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.039521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.039591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.039857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.039894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 
00:29:08.482 [2024-11-25 13:28:06.040013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.040047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.040199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.040236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.040458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.040527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.040793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.040858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.041158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.041224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 
00:29:08.482 [2024-11-25 13:28:06.041508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.041549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.041676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.041712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.041926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.041991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.042286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.042336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.042449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.042486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 
00:29:08.482 [2024-11-25 13:28:06.042737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.042772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.042940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.043004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.043248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.043328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.043546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.043611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.043838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.043873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 
00:29:08.482 [2024-11-25 13:28:06.044014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.044049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.044166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.044203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.044477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.044545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.044836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.044908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.045213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.045248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 
00:29:08.482 [2024-11-25 13:28:06.045375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.045412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.045652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.045686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.045857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.045907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.046203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.046271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.046514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.046580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 
00:29:08.482 [2024-11-25 13:28:06.046874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.046940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.047203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.047268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.047572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.047638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.047888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.047953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.048175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.048240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 
00:29:08.482 [2024-11-25 13:28:06.048554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.048628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.048875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.048943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.049210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.049278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.482 qpair failed and we were unable to recover it. 00:29:08.482 [2024-11-25 13:28:06.049529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.482 [2024-11-25 13:28:06.049597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.049815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.049883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 
00:29:08.483 [2024-11-25 13:28:06.050176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.050247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.050525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.050591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.050858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.050924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.051210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.051276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.051542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.051608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 
00:29:08.483 [2024-11-25 13:28:06.051891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.051965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.052267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.052358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.052627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.052661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.052761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.052796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.053021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.053056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 
00:29:08.483 [2024-11-25 13:28:06.053226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.053333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.053606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.053672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.053949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.053983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.054136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.054192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 00:29:08.483 [2024-11-25 13:28:06.054489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.483 [2024-11-25 13:28:06.054556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.483 qpair failed and we were unable to recover it. 
00:29:08.483 [2024-11-25 13:28:06.054782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.483 [2024-11-25 13:28:06.054816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.483 qpair failed and we were unable to recover it.
00:29:08.486 (the same connect() failed errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f83ec000b90, addr=10.0.0.2, port=4420 repeated through [2024-11-25 13:28:06.089136])
00:29:08.486 [2024-11-25 13:28:06.089437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.089515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.089809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.089876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.090126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.090204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.090494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.090562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.090834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.090901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 
00:29:08.486 [2024-11-25 13:28:06.091193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.091264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.091600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.091667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.091952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.092018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.092271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.092358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.092651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.092723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 
00:29:08.486 [2024-11-25 13:28:06.092980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.093045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.093341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.093416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.093668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.093735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.094005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.094072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 00:29:08.486 [2024-11-25 13:28:06.094370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.486 [2024-11-25 13:28:06.094437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.486 qpair failed and we were unable to recover it. 
00:29:08.487 [2024-11-25 13:28:06.094733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.094810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.095062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.095126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.095435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.095501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.095755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.095823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.096075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.096141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 
00:29:08.487 [2024-11-25 13:28:06.096383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.096450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.096685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.096754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.097040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.097116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.097385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.097453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.097710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.097780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 
00:29:08.487 [2024-11-25 13:28:06.098082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.098148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.098433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.098500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.098778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.098857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.099162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.099228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.099492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.099563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 
00:29:08.487 [2024-11-25 13:28:06.099867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.099958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.100287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.100415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.100746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.100821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.101080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.101152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.101423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.101501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 
00:29:08.487 [2024-11-25 13:28:06.101753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.101822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.102081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.102149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.102412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.102478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.102836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.102923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.103203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.103272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 
00:29:08.487 [2024-11-25 13:28:06.103588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.103683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.103935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.104003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.104227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.104294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.104584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.104656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 00:29:08.487 [2024-11-25 13:28:06.104890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.487 [2024-11-25 13:28:06.104986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.487 qpair failed and we were unable to recover it. 
00:29:08.761 [2024-11-25 13:28:06.105335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.105441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.105838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.105969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.106319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.106392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.106695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.106776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.107133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.107241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 
00:29:08.761 [2024-11-25 13:28:06.107568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.107652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.107953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.108033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.108332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.108410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.108777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.108850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.109168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.109236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 
00:29:08.761 [2024-11-25 13:28:06.109532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.109601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.109911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.109976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.110230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.110296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.110577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.110646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.110928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.111001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 
00:29:08.761 [2024-11-25 13:28:06.111221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.111290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.111558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.111631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.111875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.111941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.112204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.112269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.112593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.112660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 
00:29:08.761 [2024-11-25 13:28:06.112898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.112963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.113259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.113358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.113641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.113714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.113924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.113992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.114215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.114282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 
00:29:08.761 [2024-11-25 13:28:06.114557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.114633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.114939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.115010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.115279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.115370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.115580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.115658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.115917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.115983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 
00:29:08.761 [2024-11-25 13:28:06.116279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.116372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.761 [2024-11-25 13:28:06.116642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.761 [2024-11-25 13:28:06.116708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.761 qpair failed and we were unable to recover it. 00:29:08.762 [2024-11-25 13:28:06.116966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.762 [2024-11-25 13:28:06.117033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.762 qpair failed and we were unable to recover it. 00:29:08.762 [2024-11-25 13:28:06.117278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.762 [2024-11-25 13:28:06.117372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.762 qpair failed and we were unable to recover it. 00:29:08.762 [2024-11-25 13:28:06.117662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.762 [2024-11-25 13:28:06.117728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.762 qpair failed and we were unable to recover it. 
00:29:08.765 [2024-11-25 13:28:06.155368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.155437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.155638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.155707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.155912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.155981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.156285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.156363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.156654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.156719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 
00:29:08.765 [2024-11-25 13:28:06.157021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.157087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.157384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.157450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.157742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.157808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.158104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.158169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.158413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.158480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 
00:29:08.765 [2024-11-25 13:28:06.158745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.158811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.159074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.159142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.159422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.159490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.159746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.159811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.160067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.160142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 
00:29:08.765 [2024-11-25 13:28:06.160393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.160463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.160715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.160794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.161054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.161120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.161349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.161418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.161712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.161782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 
00:29:08.765 [2024-11-25 13:28:06.162069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.162136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.162406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.162474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.162780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.162846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.163049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.163120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.163392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.163460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 
00:29:08.765 [2024-11-25 13:28:06.163708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.163775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.164068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.164141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.164410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.164476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.164769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.164847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.165100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.165168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 
00:29:08.765 [2024-11-25 13:28:06.165478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.165545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.165833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.165899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.166149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.765 [2024-11-25 13:28:06.166216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.765 qpair failed and we were unable to recover it. 00:29:08.765 [2024-11-25 13:28:06.166523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.166591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.166846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.166911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.167174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.167239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.167525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.167592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.167855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.167932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.168235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.168318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.168600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.168665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.168965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.169032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.169335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.169405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.169725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.169796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.170043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.170108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.170362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.170432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.170697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.170762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.171043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.171108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.171367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.171435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.171734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.171799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.172083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.172156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.172470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.172538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.172837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.172902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.173197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.173271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.173538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.173617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.173878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.173944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.174147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.174213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.174487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.174554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.174809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.174875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.175172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.175244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.175552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.175619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.175856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.175922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.176208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.176281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.176619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.176685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.176965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.177031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.177345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.177421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.177723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.177797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.178095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.178161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.178422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.178490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.178752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.178821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.179112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.179187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 
00:29:08.766 [2024-11-25 13:28:06.179456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.766 [2024-11-25 13:28:06.179525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.766 qpair failed and we were unable to recover it. 00:29:08.766 [2024-11-25 13:28:06.179782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.767 [2024-11-25 13:28:06.179849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.767 qpair failed and we were unable to recover it. 00:29:08.767 [2024-11-25 13:28:06.180106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.767 [2024-11-25 13:28:06.180172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.767 qpair failed and we were unable to recover it. 00:29:08.767 [2024-11-25 13:28:06.180469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.767 [2024-11-25 13:28:06.180536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.767 qpair failed and we were unable to recover it. 00:29:08.767 [2024-11-25 13:28:06.180789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.767 [2024-11-25 13:28:06.180854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.767 qpair failed and we were unable to recover it. 
00:29:08.767 [2024-11-25 13:28:06.181144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.767 [2024-11-25 13:28:06.181220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.767 qpair failed and we were unable to recover it.
00:29:08.770 [the three messages above repeat verbatim with successive timestamps, from 2024-11-25 13:28:06.181515 through 2024-11-25 13:28:06.220040, always for tqpair=0x7f83ec000b90, addr=10.0.0.2, port=4420]
00:29:08.770 [2024-11-25 13:28:06.220346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.220412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.220667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.220735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.220989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.221055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.221321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.221387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.221678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.221752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 
00:29:08.770 [2024-11-25 13:28:06.222001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.222068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.222382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.222449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.222703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.222779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.223027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.223092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.223354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.223422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 
00:29:08.770 [2024-11-25 13:28:06.223679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.223748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.224023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.224088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.224385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.224452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.224742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.224817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.225081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.225146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 
00:29:08.770 [2024-11-25 13:28:06.225381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.225448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.225702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.225768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.225959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.226027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.226342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.226410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.226695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.226768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 
00:29:08.770 [2024-11-25 13:28:06.226999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.227064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.227331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.227398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.227648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.227714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.227963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.228028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.770 [2024-11-25 13:28:06.228361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.228428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 
00:29:08.770 [2024-11-25 13:28:06.228752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.770 [2024-11-25 13:28:06.228819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.770 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.229069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.229138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.229430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.229498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.229721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.229790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.230074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.230149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 
00:29:08.771 [2024-11-25 13:28:06.230403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.230471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.230765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.230837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.231107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.231174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.231426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.231495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.231768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.231834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 
00:29:08.771 [2024-11-25 13:28:06.232120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.232192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.232503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.232569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.232864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.232934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.233225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.233291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.233575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.233641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 
00:29:08.771 [2024-11-25 13:28:06.233917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.233993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.234240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.234337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.234606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.234671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.234951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.235024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.235336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.235404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 
00:29:08.771 [2024-11-25 13:28:06.235671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.235736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.236031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.236099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.236363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.236443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.236742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.236809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.237065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.237133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 
00:29:08.771 [2024-11-25 13:28:06.237432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.237500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.237787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.237853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.238111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.238176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.238435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.238502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.238724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.238789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 
00:29:08.771 [2024-11-25 13:28:06.239038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.239106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.239396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.239473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.239772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.239838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.240125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.240195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.240489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.240558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 
00:29:08.771 [2024-11-25 13:28:06.240832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.240897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.241159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.241228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.241507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.241585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.241847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.771 [2024-11-25 13:28:06.241912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.771 qpair failed and we were unable to recover it. 00:29:08.771 [2024-11-25 13:28:06.242163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.242228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 
00:29:08.772 [2024-11-25 13:28:06.242520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.242597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.242894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.242959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.243205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.243270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.243555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.243631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.243849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.243917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 
00:29:08.772 [2024-11-25 13:28:06.244182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.244249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.244479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.244548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.244819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.244886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.245180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.245246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 00:29:08.772 [2024-11-25 13:28:06.245538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.245604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 
00:29:08.772 [2024-11-25 13:28:06.245854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.772 [2024-11-25 13:28:06.245920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.772 qpair failed and we were unable to recover it. 
00:29:08.775 [2024-11-25 13:28:06.285186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.285252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.285529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.285598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.285851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.285918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.286128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.286196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.286469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.286549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 
00:29:08.775 [2024-11-25 13:28:06.286835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.286901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.287207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.287273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.287527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.287593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.287870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.287946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.288257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.288353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 
00:29:08.775 [2024-11-25 13:28:06.288616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.288684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.288986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.289052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.289346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.289415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.289663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.289729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.289988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.290053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 
00:29:08.775 [2024-11-25 13:28:06.290317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.290385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.290611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.290676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.290894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.290973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.291171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.291240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 00:29:08.775 [2024-11-25 13:28:06.291542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.775 [2024-11-25 13:28:06.291612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.775 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.291908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.291974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.292261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.292358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.292623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.292689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.292900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.292968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.293256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.293343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.293632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.293697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.293957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.294022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.294325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.294393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.294646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.294711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.295000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.295073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.295331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.295398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.295711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.295777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.296073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.296138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.296441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.296511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.296734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.296800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.297060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.297126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.297417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.297484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.297734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.297802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.298085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.298156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.298456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.298522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.298774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.298843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.299140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.299210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.299487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.299554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.299855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.299932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.300240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.300335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.300633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.300698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.300981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.301049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.301358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.301427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.301711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.301784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.302041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.302109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.302340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.302406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.302656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.302722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.303015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.303082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.303345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.303412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.303669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.303734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 
00:29:08.776 [2024-11-25 13:28:06.303998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.304064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.304361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.304428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.304647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.776 [2024-11-25 13:28:06.304728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.776 qpair failed and we were unable to recover it. 00:29:08.776 [2024-11-25 13:28:06.304991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.305058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.305282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.305369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 
00:29:08.777 [2024-11-25 13:28:06.305664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.305733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.305987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.306053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.306335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.306409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.306704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.306770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.307017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.307084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 
00:29:08.777 [2024-11-25 13:28:06.307376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.307445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.307708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.307774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.308027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.308092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.308341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.308409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.308662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.308730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 
00:29:08.777 [2024-11-25 13:28:06.309027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.309094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.309404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.309483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.309728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.309794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.310086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.310156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 00:29:08.777 [2024-11-25 13:28:06.310429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.777 [2024-11-25 13:28:06.310497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.777 qpair failed and we were unable to recover it. 
00:29:08.777 [2024-11-25 13:28:06.310786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.777 [2024-11-25 13:28:06.310860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.777 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f83ec000b90 (addr=10.0.0.2, port=4420) repeats continuously from 13:28:06.311150 through 13:28:06.350140 ...]
00:29:08.780 [2024-11-25 13:28:06.350430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.350497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.350761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.350827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.351124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.351191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.351424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.351491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.351740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.351806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 
00:29:08.780 [2024-11-25 13:28:06.352024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.352092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.352321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.352389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.352658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.352734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.353025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.353091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.353344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.353411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 
00:29:08.780 [2024-11-25 13:28:06.353670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.353736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.354026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.354097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.354371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.354439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.354733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.354802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.780 qpair failed and we were unable to recover it. 00:29:08.780 [2024-11-25 13:28:06.355038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.780 [2024-11-25 13:28:06.355116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.355380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.355451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.355732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.355797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.356055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.356122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.356375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.356444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.356736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.356806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.357097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.357163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.357460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.357528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.357825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.357890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.358101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.358167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.358415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.358482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.358702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.358768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.358994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.359060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.359322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.359392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.359663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.359729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.360026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.360093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.360351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.360418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.360709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.360779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.361078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.361144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.361428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.361496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.361699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.361768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.362015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.362080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.362398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.362470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.362726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.362794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.363056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.363124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.363424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.363492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.363787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.363852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.364146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.364212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.364524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.364591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.364815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.364880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.365125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.365191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.365508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.365579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.365866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.365943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.366193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.366260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.366579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.366645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.366926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.366992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 
00:29:08.781 [2024-11-25 13:28:06.367282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.367374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.367633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.367699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.781 [2024-11-25 13:28:06.368001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.781 [2024-11-25 13:28:06.368066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.781 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.368355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.368422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.368675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.368752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 
00:29:08.782 [2024-11-25 13:28:06.369017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.369082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.369366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.369432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.369739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.369804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.370044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.370111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.370320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.370387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 
00:29:08.782 [2024-11-25 13:28:06.370682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.370749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.370968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.371034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.371333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.371401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.371627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.371692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.371984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.372053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 
00:29:08.782 [2024-11-25 13:28:06.372354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.372421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.372714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.372779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.373025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.373091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.373349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.373417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.373703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.373778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 
00:29:08.782 [2024-11-25 13:28:06.374061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.374127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.374423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.374492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.374757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.374822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.375080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.375146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.375395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.375464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 
00:29:08.782 [2024-11-25 13:28:06.375756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.375826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.376084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.376150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.376390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.376456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.376683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.376752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.377040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.377114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 
00:29:08.782 [2024-11-25 13:28:06.377408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.377474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.377776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.377852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.378104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.378171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.378408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.378475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 00:29:08.782 [2024-11-25 13:28:06.378773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.782 [2024-11-25 13:28:06.378838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:08.782 qpair failed and we were unable to recover it. 
00:29:08.782 [2024-11-25 13:28:06.379130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.782 [2024-11-25 13:28:06.379196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.782 qpair failed and we were unable to recover it.
00:29:08.782 [2024-11-25 13:28:06.379442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.782 [2024-11-25 13:28:06.379509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.782 qpair failed and we were unable to recover it.
00:29:08.782 [2024-11-25 13:28:06.379728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.782 [2024-11-25 13:28:06.379794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.782 qpair failed and we were unable to recover it.
00:29:08.782 [2024-11-25 13:28:06.380052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.782 [2024-11-25 13:28:06.380121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.782 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.380354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.380438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.380694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.380762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.381027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.381094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.381325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.381394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.381657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.381726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.382024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.382107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.382319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.382386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.382627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.382693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.382986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.383057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.383320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.383387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.383627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.383694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.383981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.384055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.384357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.384430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.384748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.384814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.385077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.385142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.385403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.385471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.385776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.385842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.386122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.386188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.386451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.386521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.386832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.386898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.387160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.387229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.387462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.387530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.387800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.387876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.388122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.388188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.388463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.388543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.388840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.388905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.389159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.389224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.389481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.389548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.389834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.389904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.390149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.390216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.390496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.390574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.390837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.390902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.391205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.391273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.391548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.391617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.391917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.391982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.392277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.392375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.392691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.392757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.783 qpair failed and we were unable to recover it.
00:29:08.783 [2024-11-25 13:28:06.392980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.783 [2024-11-25 13:28:06.393049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.393332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.393406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.393710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.393775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.393994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.394061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.394325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.394391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.394689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.394754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.395011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.395078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.395380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.395447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.395684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.395760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.396019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.396084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.396370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.396445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.396737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.396803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.397022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.397088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.397323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.397389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.397673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.397744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.398053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.398119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.398373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.398439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.398651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.398728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.398967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.399033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.399292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.399371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.399655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.399721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.399967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.400033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.400345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.400423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.400683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.400749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.401076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.401177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.401533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.401606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.401869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.401935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.402225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.402296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.402608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.402675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.402970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.403036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.403325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.403419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.403740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.403825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.404102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.404168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.784 qpair failed and we were unable to recover it.
00:29:08.784 [2024-11-25 13:28:06.404381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.784 [2024-11-25 13:28:06.404452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.785 qpair failed and we were unable to recover it.
00:29:08.785 [2024-11-25 13:28:06.404748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.785 [2024-11-25 13:28:06.404816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.785 qpair failed and we were unable to recover it.
00:29:08.785 [2024-11-25 13:28:06.405120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.785 [2024-11-25 13:28:06.405198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.785 qpair failed and we were unable to recover it.
00:29:08.785 [2024-11-25 13:28:06.405445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.785 [2024-11-25 13:28:06.405513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.785 qpair failed and we were unable to recover it.
00:29:08.785 [2024-11-25 13:28:06.405740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.785 [2024-11-25 13:28:06.405831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:08.785 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.406202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.406330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.406712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.406828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.407127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.407196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.407498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.407575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.407856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.407942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.408296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.408397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.408664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.408738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.056 [2024-11-25 13:28:06.408985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.056 [2024-11-25 13:28:06.409054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.056 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.409396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.409490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.409778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.409847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.410113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.410209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.410518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.410597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.410814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.410881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.411141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.411206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.411476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.411545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.411838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.057 [2024-11-25 13:28:06.411910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.057 qpair failed and we were unable to recover it.
00:29:09.057 [2024-11-25 13:28:06.412209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.412274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.412499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.412568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.412853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.412929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.413231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.413295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.413586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.413652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 
00:29:09.057 [2024-11-25 13:28:06.413901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.413965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.414194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.414259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.414515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.414582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.414896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.414960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.415221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.415286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 
00:29:09.057 [2024-11-25 13:28:06.415558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.415624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.415912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.415985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.416271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.416355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.416625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.416691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.416948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.417012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 
00:29:09.057 [2024-11-25 13:28:06.417292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.417381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.417673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.417738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.417946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.418014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.418248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.418330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.418574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.418639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 
00:29:09.057 [2024-11-25 13:28:06.418840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.418906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.419143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.419209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.419514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.419579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.419798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.419864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.420126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.420191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 
00:29:09.057 [2024-11-25 13:28:06.420460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.420533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.420793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.420861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.421151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.421226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.057 qpair failed and we were unable to recover it. 00:29:09.057 [2024-11-25 13:28:06.421518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.057 [2024-11-25 13:28:06.421585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.421839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.421904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.422110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.422180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.422436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.422503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.422769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.422834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.423091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.423157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.423349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.423427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.423719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.423789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.424059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.424126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.424420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.424490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.424756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.424821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.425116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.425185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.425426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.425495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.425743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.425809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.426077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.426142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.426413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.426481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.426779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.426844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.427106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.427175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.427478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.427545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.427790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.427855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.428163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.428228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.428502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.428569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.428854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.428919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.429112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.429177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.429439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.429506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.429800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.429873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.430163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.430228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.430536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.430603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.430853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.430922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.431219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.431288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.431575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.431643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.431867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.431932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.432149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.432215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.432472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.432539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.432826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.432892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.433138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.433204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.433513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.433580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 
00:29:09.058 [2024-11-25 13:28:06.433869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.433935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.058 qpair failed and we were unable to recover it. 00:29:09.058 [2024-11-25 13:28:06.434182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.058 [2024-11-25 13:28:06.434248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.434524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.434589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.434840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.434907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.435159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.435228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 
00:29:09.059 [2024-11-25 13:28:06.435530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.435597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.435824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.435889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.436151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.436218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.436509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.436576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.436828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.436904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 
00:29:09.059 [2024-11-25 13:28:06.437190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.437264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.437546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.437612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.437864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.437929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.438219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.438289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.438606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.438682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 
00:29:09.059 [2024-11-25 13:28:06.438984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.439050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.439260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.439345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.439581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.439647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.439860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.439928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 00:29:09.059 [2024-11-25 13:28:06.440217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.059 [2024-11-25 13:28:06.440289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.059 qpair failed and we were unable to recover it. 
00:29:09.059 [2024-11-25 13:28:06.440609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.059 [2024-11-25 13:28:06.440674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.059 qpair failed and we were unable to recover it.
00:29:09.062 [the same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it" triplet for tqpair=0x7f83ec000b90 (addr=10.0.0.2, port=4420) repeated continuously with advancing timestamps through 2024-11-25 13:28:06.479]
00:29:09.062 [2024-11-25 13:28:06.480015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.480081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.480331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.480398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.480700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.480765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.481030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.481098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.481360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.481428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 
00:29:09.062 [2024-11-25 13:28:06.481721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.481796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.482089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.482155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.482448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.482514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.482736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.482804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.483055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.483120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 
00:29:09.062 [2024-11-25 13:28:06.483452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.483519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.062 [2024-11-25 13:28:06.483816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.062 [2024-11-25 13:28:06.483886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.062 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.484131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.484196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.484532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.484599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.484891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.484956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 
00:29:09.063 [2024-11-25 13:28:06.485218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.485286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.485599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.485666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.485929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.485994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.486235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.486316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.486612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.486685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 
00:29:09.063 [2024-11-25 13:28:06.486929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.486995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.487298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.487382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.487595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.487664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.487951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.488024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.488290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.488368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 
00:29:09.063 [2024-11-25 13:28:06.488577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.488642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.488859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.488926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.489217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.489288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.489588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.489654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.489952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.490018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 
00:29:09.063 [2024-11-25 13:28:06.490272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.490359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.490625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.490691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.490945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.491014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.491257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.491351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.491652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.491725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 
00:29:09.063 [2024-11-25 13:28:06.492007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.492072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.492381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.492448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.492743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.492809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.493100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.493165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.493372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.493439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 
00:29:09.063 [2024-11-25 13:28:06.493693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.493761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.493979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.494045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.494301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.494391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.494689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.494755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.495029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.495104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 
00:29:09.063 [2024-11-25 13:28:06.495294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.495374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.495632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.495698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.495997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.496061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.496372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.496439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.063 qpair failed and we were unable to recover it. 00:29:09.063 [2024-11-25 13:28:06.496692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.063 [2024-11-25 13:28:06.496761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 
00:29:09.064 [2024-11-25 13:28:06.497014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.497080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.497379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.497446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.497735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.497801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.498093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.498157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.498433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.498500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 
00:29:09.064 [2024-11-25 13:28:06.498728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.498794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.499043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.499110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.499364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.499431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.499693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.499758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.500002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.500071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 
00:29:09.064 [2024-11-25 13:28:06.500322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.500389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.500611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.500689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.500983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.501055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.501354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.501421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.501669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.501735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 
00:29:09.064 [2024-11-25 13:28:06.501953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.502018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.502288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.502379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.502671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.502737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.502978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.503043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.503330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.503405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 
00:29:09.064 [2024-11-25 13:28:06.503692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.503759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.503985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.504052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.504322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.504390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.504653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.504720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.505012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.505084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 
00:29:09.064 [2024-11-25 13:28:06.505400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.505476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.505707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.505777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.506021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.506085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.506325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.506393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 00:29:09.064 [2024-11-25 13:28:06.506639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.064 [2024-11-25 13:28:06.506708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.064 qpair failed and we were unable to recover it. 
00:29:09.067 [... identical posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." triplets for tqpair=0x7f83ec000b90 (addr=10.0.0.2, port=4420) repeat continuously through 13:28:06.544255 ...]
00:29:09.067 [2024-11-25 13:28:06.544495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.067 [2024-11-25 13:28:06.544561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.067 qpair failed and we were unable to recover it. 00:29:09.067 [2024-11-25 13:28:06.544823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.067 [2024-11-25 13:28:06.544889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.067 qpair failed and we were unable to recover it. 00:29:09.067 [2024-11-25 13:28:06.545183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.067 [2024-11-25 13:28:06.545249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.067 qpair failed and we were unable to recover it. 00:29:09.067 [2024-11-25 13:28:06.545470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.067 [2024-11-25 13:28:06.545538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.067 qpair failed and we were unable to recover it. 00:29:09.067 [2024-11-25 13:28:06.545805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.067 [2024-11-25 13:28:06.545871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.067 qpair failed and we were unable to recover it. 
00:29:09.067 [2024-11-25 13:28:06.546146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.546213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.546481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.546551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.546808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.546874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.547122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.547190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.547489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.547557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-11-25 13:28:06.547820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.547886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.548131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.548197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.548516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.548583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.548841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.548906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.549156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.549225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-11-25 13:28:06.549506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.549573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.549869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.549935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.550197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.550263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.550620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.550686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.550931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.550996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-11-25 13:28:06.551297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.551379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.551642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.551708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.551910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.551978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.552271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.552351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.552599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.552667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-11-25 13:28:06.552934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.553000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.553313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.553388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.553620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.553686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.553980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.554044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.554289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.554374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-11-25 13:28:06.554621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.554689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.554955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.555034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.555341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.555408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.555705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.555771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.556080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.556145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-11-25 13:28:06.556400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.556469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.556761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.556827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.557085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.557153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.557461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.557528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.557821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.557887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 
00:29:09.068 [2024-11-25 13:28:06.558103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.558169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.558436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.068 [2024-11-25 13:28:06.558503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.068 qpair failed and we were unable to recover it. 00:29:09.068 [2024-11-25 13:28:06.558803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.558869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.559150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.559216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.559530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.559597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-11-25 13:28:06.559871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.559938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.560226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.560291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.560608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.560675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.560926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.560997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.561319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.561386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-11-25 13:28:06.561644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.561711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.561964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.562032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.562236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.562318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.562580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.562647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.562907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.562975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-11-25 13:28:06.563239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.563318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.563621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.563687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.563988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.564053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.564328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.564406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.564660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.564729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-11-25 13:28:06.564989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.565055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.565292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.565389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.565659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.565724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.566014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.566079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.566341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.566410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-11-25 13:28:06.566701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.566767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.567068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.567134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.567339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.567406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.567601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.567670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.567977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.568043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-11-25 13:28:06.568286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.568367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.568632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.568709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.568968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.569034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.569296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.569376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 00:29:09.069 [2024-11-25 13:28:06.569653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.569719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
00:29:09.069 [2024-11-25 13:28:06.570012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.069 [2024-11-25 13:28:06.570078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.069 qpair failed and we were unable to recover it. 
[The same three-line error sequence — posix_sock_create connect() failed with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from 13:28:06.570337 through 13:28:06.608836; the repeated occurrences are elided here.]
00:29:09.073 [2024-11-25 13:28:06.609089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.073 [2024-11-25 13:28:06.609154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.073 qpair failed and we were unable to recover it. 00:29:09.073 [2024-11-25 13:28:06.609452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.073 [2024-11-25 13:28:06.609530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.073 qpair failed and we were unable to recover it. 00:29:09.073 [2024-11-25 13:28:06.609836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.073 [2024-11-25 13:28:06.609911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.073 qpair failed and we were unable to recover it. 00:29:09.073 [2024-11-25 13:28:06.610154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.073 [2024-11-25 13:28:06.610221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.073 qpair failed and we were unable to recover it. 00:29:09.073 [2024-11-25 13:28:06.610532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.610609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 
00:29:09.074 [2024-11-25 13:28:06.610862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.610930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.611227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.611294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.611582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.611653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.611892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.611961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.612249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.612333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 
00:29:09.074 [2024-11-25 13:28:06.612567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.612633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.612922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.612988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.613255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.613353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.613613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.613678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.613958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.614023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 
00:29:09.074 [2024-11-25 13:28:06.614251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.614341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.614638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.614714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.615002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.615068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.615372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.615440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.615731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.615796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 
00:29:09.074 [2024-11-25 13:28:06.616072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.616138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.616399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.616470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.616757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.616822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.617112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.617178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.617447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.617515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 
00:29:09.074 [2024-11-25 13:28:06.617774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.617839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.618068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.618134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.618466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.618534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.618837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.618902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.619156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.619223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 
00:29:09.074 [2024-11-25 13:28:06.619511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.619580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.619875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.619941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.620188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.620255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.620543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.620610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.074 [2024-11-25 13:28:06.620900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.620968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 
00:29:09.074 [2024-11-25 13:28:06.621228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.074 [2024-11-25 13:28:06.621297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.074 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.621632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.621700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.621993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.622058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.622365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.622433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.622722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.622794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 
00:29:09.075 [2024-11-25 13:28:06.623100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.623165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.623386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.623467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.623772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.623838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.624124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.624190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.624473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.624541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 
00:29:09.075 [2024-11-25 13:28:06.624845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.624910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.625150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.625216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.625503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.625570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.625825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.625891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.626158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.626226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 
00:29:09.075 [2024-11-25 13:28:06.626514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.626592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.626808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.626873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.627125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.627194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.627464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.627532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.627799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.627868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 
00:29:09.075 [2024-11-25 13:28:06.628155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.628222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.628542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.628620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.628928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.628994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.629225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.629291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.629571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.629647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 
00:29:09.075 [2024-11-25 13:28:06.629906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.629973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.630233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.630299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.630560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.630637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.630928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.630994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.631244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.631333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 
00:29:09.075 [2024-11-25 13:28:06.631545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.631611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.631822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.631892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.075 [2024-11-25 13:28:06.632159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.075 [2024-11-25 13:28:06.632224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.075 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.632518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.632586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.632842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.632909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 
00:29:09.076 [2024-11-25 13:28:06.633141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.633206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.633490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.633560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.633804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.633871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.634169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.634234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.634519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.634588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 
00:29:09.076 [2024-11-25 13:28:06.634882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.634947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.635236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.635324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.635577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.635644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.635852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.635918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 00:29:09.076 [2024-11-25 13:28:06.636164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.076 [2024-11-25 13:28:06.636230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.076 qpair failed and we were unable to recover it. 
00:29:09.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3286982 Killed "${NVMF_APP[@]}" "$@"
00:29:09.079 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:09.079 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:09.079 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:09.079 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:09.079 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3287437 00:29:09.080 [2024-11-25 13:28:06.668998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:09.080 [2024-11-25 13:28:06.669064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3287437 00:29:09.080 [2024-11-25 13:28:06.669322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.669372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3287437 ']' 00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.080 [2024-11-25 13:28:06.669539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.669609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it.
00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.080 [2024-11-25 13:28:06.669901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.080 [2024-11-25 13:28:06.669967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 13:28:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.080 [2024-11-25 13:28:06.670214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.670281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.670541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.670581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 
00:29:09.080 [2024-11-25 13:28:06.670860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.670926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.671233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.671378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.671574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.671611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.671914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.671981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.672243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.672325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 
00:29:09.080 [2024-11-25 13:28:06.672507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.672542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.672731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.672766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.672940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.672975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.673123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.673160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 00:29:09.080 [2024-11-25 13:28:06.673319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.673366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.080 qpair failed and we were unable to recover it. 
00:29:09.080 [2024-11-25 13:28:06.673532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.080 [2024-11-25 13:28:06.673576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.673754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.673790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.673927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.673962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.674119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.674155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.674270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.674314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 
00:29:09.081 [2024-11-25 13:28:06.674442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.674476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.674588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.674622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.674797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.674831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.674968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.675003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.675141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.675175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 
00:29:09.081 [2024-11-25 13:28:06.675337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.675384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.675503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.675536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.675679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.675712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.675885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.675919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.676052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.676091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 
00:29:09.081 [2024-11-25 13:28:06.676232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.676266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.676429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.676464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.676625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.676659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.676792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.676827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.676941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.676974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 
00:29:09.081 [2024-11-25 13:28:06.677119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.677153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.677257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.677290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.677439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.677472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.677613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.677646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.677790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.677824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 
00:29:09.081 [2024-11-25 13:28:06.677965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.677999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.678163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.081 [2024-11-25 13:28:06.678215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.081 qpair failed and we were unable to recover it. 00:29:09.081 [2024-11-25 13:28:06.678393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.678426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.678545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.678578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.678714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.678743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 
00:29:09.082 [2024-11-25 13:28:06.678851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.678882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.678988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.679017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.679112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.679141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.679244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.679274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.679448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.679494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 
00:29:09.082 [2024-11-25 13:28:06.679661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.679693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.679822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.679854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.679990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.680021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.680158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.680190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.680323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.680357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 
00:29:09.082 [2024-11-25 13:28:06.680474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.680506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.680671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.680708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.680835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.680866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.681000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.681031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.681200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.681232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 
00:29:09.082 [2024-11-25 13:28:06.681336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.681368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.681494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.681526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.681627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.681658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.681763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.681794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 00:29:09.082 [2024-11-25 13:28:06.681960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.082 [2024-11-25 13:28:06.681992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.082 qpair failed and we were unable to recover it. 
00:29:09.083 [2024-11-25 13:28:06.682121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.682152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.682261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.682298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.682456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.682484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.682588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.682615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.682748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.682777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 
00:29:09.083 [2024-11-25 13:28:06.682910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.682938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.683065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.683094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.683223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.683250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.683416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.683444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.683556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.683585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 
00:29:09.083 [2024-11-25 13:28:06.683675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.683703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.683830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.683858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.683952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.683980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.684086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.684115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.684248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.684277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 
00:29:09.083 [2024-11-25 13:28:06.684389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.684418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.684524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.684553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.684669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.684696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.684792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.684822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.684948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.684975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 
00:29:09.083 [2024-11-25 13:28:06.685119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.685145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.685239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.685265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.685372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.685399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.685538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.685563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.685649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.685675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 
00:29:09.083 [2024-11-25 13:28:06.685789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.685814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.685955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.083 [2024-11-25 13:28:06.685981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.083 qpair failed and we were unable to recover it. 00:29:09.083 [2024-11-25 13:28:06.686072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.686098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.686228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.686255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.686399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.686426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 
00:29:09.084 [2024-11-25 13:28:06.686508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.686533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.686624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.686650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.686740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.686765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.686851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.686877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.687004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 
00:29:09.084 [2024-11-25 13:28:06.687125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.687260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.687377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.687511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.687655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 
00:29:09.084 [2024-11-25 13:28:06.687770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.687912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.687939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.688054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.688080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.688176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.688202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.688288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.688328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 
00:29:09.084 [2024-11-25 13:28:06.688479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.688510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.688607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.688632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.688733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.688758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.688868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.688894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.689009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.689036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 
00:29:09.084 [2024-11-25 13:28:06.689177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.689204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.689329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.689356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.690010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.690041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.690160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.690188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.690332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.690360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 
00:29:09.084 [2024-11-25 13:28:06.690457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.690482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.690569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.690595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.690742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.690769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.690885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.690912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 00:29:09.084 [2024-11-25 13:28:06.691033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.084 [2024-11-25 13:28:06.691059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.084 qpair failed and we were unable to recover it. 
00:29:09.084 [2024-11-25 13:28:06.691151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.691177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.691276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.691309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.691449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.691474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.691568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.691594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.691729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.691755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 
00:29:09.085 [2024-11-25 13:28:06.691900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.691926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.692023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.692050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.692149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.692174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.692335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.692364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.692485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.692510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 
00:29:09.085 [2024-11-25 13:28:06.692636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.692662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.692775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.692801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.692911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.692940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.693084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.693111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.693233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.693258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 
00:29:09.085 [2024-11-25 13:28:06.693359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.693386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.693532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.693558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.693668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.693695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.693823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.693849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.693970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.693995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 
00:29:09.085 [2024-11-25 13:28:06.694116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.694142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.694230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.694256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.694352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.694379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.694455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.694480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.694617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.694643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 
00:29:09.085 [2024-11-25 13:28:06.694771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.694798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.694922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.694949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.695070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.695095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.695203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.695230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.695350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.695376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 
00:29:09.085 [2024-11-25 13:28:06.695486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.695512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.695634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.695660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.695762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.695788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.695902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.695927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.696046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.696071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 
00:29:09.085 [2024-11-25 13:28:06.696171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.696197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.696337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.696365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.696484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.085 [2024-11-25 13:28:06.696510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.085 qpair failed and we were unable to recover it. 00:29:09.085 [2024-11-25 13:28:06.696652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.696678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.696795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.696821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 
00:29:09.086 [2024-11-25 13:28:06.696920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.696946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.697094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.697120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.697233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.697258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.697363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.697389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.697484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.697510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 
00:29:09.086 [2024-11-25 13:28:06.697597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.697623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.697737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.697763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.697856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.697881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.697999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.698025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.698115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.698140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 
00:29:09.086 [2024-11-25 13:28:06.698264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.698290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.698387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.698413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.698529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.698555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.698675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.698704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 00:29:09.086 [2024-11-25 13:28:06.698818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.086 [2024-11-25 13:28:06.698844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.086 qpair failed and we were unable to recover it. 
00:29:09.373 [2024-11-25 13:28:06.707101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.373 [2024-11-25 13:28:06.707127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.373 qpair failed and we were unable to recover it.
00:29:09.373 [2024-11-25 13:28:06.707424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.373 [2024-11-25 13:28:06.707462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:09.373 qpair failed and we were unable to recover it.
00:29:09.373 [2024-11-25 13:28:06.708399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.373 [2024-11-25 13:28:06.708427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.373 qpair failed and we were unable to recover it.
00:29:09.374 [2024-11-25 13:28:06.714493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.714518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.714612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.714637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.714776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.714803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.714901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.714928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.715050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.715077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 
00:29:09.374 [2024-11-25 13:28:06.715172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.715197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.715318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.715344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.715469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.715494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.715617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.715645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 00:29:09.374 [2024-11-25 13:28:06.715757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.374 [2024-11-25 13:28:06.715783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.374 qpair failed and we were unable to recover it. 
00:29:09.374 [2024-11-25 13:28:06.715893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.374 [2024-11-25 13:28:06.715929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:09.374 qpair failed and we were unable to recover it.
00:29:09.375 [2024-11-25 13:28:06.720117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.375 [2024-11-25 13:28:06.720158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.375 qpair failed and we were unable to recover it.
00:29:09.376 [2024-11-25 13:28:06.723605] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization...
00:29:09.376 [2024-11-25 13:28:06.723691] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:09.377 [2024-11-25 13:28:06.727844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.727871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.727966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.727993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.728117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.728145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.728256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.728284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.728406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.728432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 
00:29:09.377 [2024-11-25 13:28:06.728529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.728555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.728652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.728678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.728795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.728821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.728943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.728971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.729086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.729116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 
00:29:09.377 [2024-11-25 13:28:06.729213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.729240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.729363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.729390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.729502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.729528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.729648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.729674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.729765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.729792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 
00:29:09.377 [2024-11-25 13:28:06.729936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.729963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.730086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.730112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.730212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.730240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.730360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.730387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.730478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.730505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 
00:29:09.377 [2024-11-25 13:28:06.730624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.730650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.730740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.730767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.730858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.730884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.731006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.731033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.731153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.731180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 
00:29:09.377 [2024-11-25 13:28:06.731300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.731331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.731460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.731487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.731624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.731651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.731764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.731791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.731936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.731963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 
00:29:09.377 [2024-11-25 13:28:06.732083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.732110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.377 [2024-11-25 13:28:06.732200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.377 [2024-11-25 13:28:06.732227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.377 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.732312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.732340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.732427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.732454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.732570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.732596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.378 [2024-11-25 13:28:06.732680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.732707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.732822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.732853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.732950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.732976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.733091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.733118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.733204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.733230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.378 [2024-11-25 13:28:06.733348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.733375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.733487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.733513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.733596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.733623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.733744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.733770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.733880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.733907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.378 [2024-11-25 13:28:06.734014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.734041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.734156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.734183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.734270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.734296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.734443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.734469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.734612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.734638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.378 [2024-11-25 13:28:06.734783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.734809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.734923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.734950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.735077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.735103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.735234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.735261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.735359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.735386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.378 [2024-11-25 13:28:06.735513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.735540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.735655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.735683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.735794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.735820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.735904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.735931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.736068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.736095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.378 [2024-11-25 13:28:06.736213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.736239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.736357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.736385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.736513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.736553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.736678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.736707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.736859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.736888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.378 [2024-11-25 13:28:06.737029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.737056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.737150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.737177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.737297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.737337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.737438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.737465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 00:29:09.378 [2024-11-25 13:28:06.737601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.378 [2024-11-25 13:28:06.737628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.378 qpair failed and we were unable to recover it. 
00:29:09.379 [2024-11-25 13:28:06.737727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.379 [2024-11-25 13:28:06.737756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.379 qpair failed and we were unable to recover it. 00:29:09.379 [2024-11-25 13:28:06.737873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.379 [2024-11-25 13:28:06.737900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.379 qpair failed and we were unable to recover it. 00:29:09.379 [2024-11-25 13:28:06.738011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.379 [2024-11-25 13:28:06.738038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.379 qpair failed and we were unable to recover it. 00:29:09.379 [2024-11-25 13:28:06.738124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.379 [2024-11-25 13:28:06.738152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.379 qpair failed and we were unable to recover it. 00:29:09.379 [2024-11-25 13:28:06.738296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.379 [2024-11-25 13:28:06.738332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.379 qpair failed and we were unable to recover it. 
00:29:09.379 [2024-11-25 13:28:06.738447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.379 [2024-11-25 13:28:06.738474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.379 qpair failed and we were unable to recover it.
00:29:09.382 [... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously through 13:28:06.754691 ...]
00:29:09.382 [2024-11-25 13:28:06.754788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.754815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.754953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.754980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.755098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.755126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.755274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.755309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.755425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.755452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 
00:29:09.382 [2024-11-25 13:28:06.755545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.755572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.755666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.755693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.755819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.755846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.755937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.755964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.756087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.756114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 
00:29:09.382 [2024-11-25 13:28:06.756211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.756240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.756321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.756353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.756477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.756505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.756597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.756625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.756765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.756792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 
00:29:09.382 [2024-11-25 13:28:06.756916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.756943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.757038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.757065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.757150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.757177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.757312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.757340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.757440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.757468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 
00:29:09.382 [2024-11-25 13:28:06.757583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.757614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.757762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.757789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.757931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.757958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.758051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.758078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.382 [2024-11-25 13:28:06.758169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.758196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 
00:29:09.382 [2024-11-25 13:28:06.758314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.382 [2024-11-25 13:28:06.758342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.382 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.758437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.758464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.758553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.758582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.758678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.758705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.758816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.758843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 
00:29:09.383 [2024-11-25 13:28:06.758941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.758969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.759081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.759108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.759197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.759224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.759318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.759347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.759444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.759471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 
00:29:09.383 [2024-11-25 13:28:06.759593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.759620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.759721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.759749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.759868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.759895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.759981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.760100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 
00:29:09.383 [2024-11-25 13:28:06.760243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.760376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.760494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.760611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.760728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 
00:29:09.383 [2024-11-25 13:28:06.760879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.760907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.761032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.761059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.761163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.761205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.761333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.761363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.761481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.761508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 
00:29:09.383 [2024-11-25 13:28:06.761600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.761627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.761775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.761802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.761915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.761942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.762083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.762109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.762230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.762257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 
00:29:09.383 [2024-11-25 13:28:06.762412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.762439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.762561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.762587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.762683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.762711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.762857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.762883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.762973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.763003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 
00:29:09.383 [2024-11-25 13:28:06.763125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.763161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.763309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.763337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.763454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.383 [2024-11-25 13:28:06.763481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.383 qpair failed and we were unable to recover it. 00:29:09.383 [2024-11-25 13:28:06.763597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.763624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.763746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.763773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 
00:29:09.384 [2024-11-25 13:28:06.763918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.763945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.764090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.764117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.764233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.764260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.764388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.764417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.764537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.764565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 
00:29:09.384 [2024-11-25 13:28:06.764689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.764716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.764811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.764838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.764932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.764960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.765078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.765105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 00:29:09.384 [2024-11-25 13:28:06.765231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.765258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it. 
00:29:09.384 [2024-11-25 13:28:06.765386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.384 [2024-11-25 13:28:06.765414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.384 qpair failed and we were unable to recover it.
[log condensed: the same three-message sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it." — repeats ~114 more times between 13:28:06.765 and 13:28:06.786, for tqpairs 0x7f83ec000b90, 0x7f83e8000b90, and 0x7f83f4000b90, all targeting addr=10.0.0.2, port=4420]
00:29:09.387 [2024-11-25 13:28:06.786427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.786454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.786567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.786594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.786682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.786709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.786811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.786840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.786937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.786964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 
00:29:09.387 [2024-11-25 13:28:06.787086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.787113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.787246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.787275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.787403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.787430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.787515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.787541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.787658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.787687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 
00:29:09.387 [2024-11-25 13:28:06.787778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.787806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.787947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.787974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.788071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.788099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.788218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.788245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.788367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.788408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 
00:29:09.387 [2024-11-25 13:28:06.788551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.788581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.788703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.788731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.788860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.788889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.387 qpair failed and we were unable to recover it. 00:29:09.387 [2024-11-25 13:28:06.788986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.387 [2024-11-25 13:28:06.789014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.789140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.789167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.789287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.789320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.789422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.789449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.789586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.789613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.789730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.789765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.789853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.789878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.789999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.790152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.790293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.790429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.790547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.790693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.790818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.790962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.790990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.791078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.791107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.791192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.791221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.791319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.791358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.791453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.791480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.791586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.791613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.791736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.791763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.791878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.791906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.792000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.792126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.792264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.792420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.792579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.792721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.792840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.792950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.792977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.793097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.793124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.793249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.793276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.793403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.793442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.793537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.793562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.793682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.793709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.793823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.793852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.793939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.793966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 
00:29:09.388 [2024-11-25 13:28:06.794056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.388 [2024-11-25 13:28:06.794083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.388 qpair failed and we were unable to recover it. 00:29:09.388 [2024-11-25 13:28:06.794166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.794196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.794286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.794323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.794454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.794481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.794592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.794620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 
00:29:09.389 [2024-11-25 13:28:06.794727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.794755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.794883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.794910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.795021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.795135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.795313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 
00:29:09.389 [2024-11-25 13:28:06.795460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.795602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.795715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.795860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.795970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.795997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 
00:29:09.389 [2024-11-25 13:28:06.796116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.796144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.796269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.796297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.796461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.796494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.796622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.796649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.796748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.796776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 
00:29:09.389 [2024-11-25 13:28:06.796897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.796924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.797013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.797144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.797284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.797439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 
00:29:09.389 [2024-11-25 13:28:06.797548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.797695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.797811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.797931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.797958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 00:29:09.389 [2024-11-25 13:28:06.798050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.389 [2024-11-25 13:28:06.798078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.389 qpair failed and we were unable to recover it. 
00:29:09.389 [2024-11-25 13:28:06.798192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.798222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.798331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.798366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.798455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.798482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.798631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.798658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.798746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.798773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.798866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.798893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.798977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.799004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.799092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.799119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.389 [2024-11-25 13:28:06.799234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.389 [2024-11-25 13:28:06.799261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.389 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.799347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.799374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.799465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.799492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.799612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.799639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.799747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.799779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.799892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.799920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.800957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.800984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.801103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.801131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.801218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.801246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.801376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.801403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.801495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.801521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.801645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.801672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.801750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.801777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.801891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.801918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.802058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.802227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.802381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.802509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.802634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.802754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.802894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.802986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.803014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.803124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.803152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.803267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.803294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.803424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.803452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.803537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.803572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.390 qpair failed and we were unable to recover it.
00:29:09.390 [2024-11-25 13:28:06.803665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.390 [2024-11-25 13:28:06.803693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.803806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.803834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.803928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.803956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.804073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.804101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.804246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.804273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.804408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.804438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.804559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.804586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.804701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.804729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.804840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.804866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.804959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.804985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.805102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.805130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.805235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.805267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.805399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.805426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.805524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.805551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.805671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.805700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.805845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.805873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.805990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.806017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.806152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.806180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.806298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.806343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.806465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.806491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.806576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.806602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.806714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.806741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.806882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.806909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.807953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.807980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.808070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.808097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.808237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.808264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.808390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.808386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:09.391 [2024-11-25 13:28:06.808417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.808562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.808595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.808688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.808716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.808834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.808861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.808949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.391 [2024-11-25 13:28:06.808976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.391 qpair failed and we were unable to recover it.
00:29:09.391 [2024-11-25 13:28:06.809088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.809203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.809325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.809439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.809548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.809702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.809812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.809953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.809980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.810091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.810118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.810211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.810239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.810385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.810412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.810556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.810583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.810666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.810693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.810814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.810842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.810929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.810956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.811045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.392 [2024-11-25 13:28:06.811073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.392 qpair failed and we were unable to recover it.
00:29:09.392 [2024-11-25 13:28:06.811195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.811223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.811345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.811373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.811491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.811517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.811639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.811666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.811776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.811803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 
00:29:09.392 [2024-11-25 13:28:06.811887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.811914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.812025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.812153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.812323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.812450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 
00:29:09.392 [2024-11-25 13:28:06.812566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.812718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.812832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.812937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.812964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.813042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.813069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 
00:29:09.392 [2024-11-25 13:28:06.813177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.813204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.813294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.813336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.813433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.813461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.813556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.813589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.813714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.813741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 
00:29:09.392 [2024-11-25 13:28:06.813884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.392 [2024-11-25 13:28:06.813911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.392 qpair failed and we were unable to recover it. 00:29:09.392 [2024-11-25 13:28:06.813997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.393 [2024-11-25 13:28:06.814026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.393 qpair failed and we were unable to recover it. 00:29:09.393 [2024-11-25 13:28:06.814150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.393 [2024-11-25 13:28:06.814177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.393 qpair failed and we were unable to recover it. 00:29:09.393 [2024-11-25 13:28:06.814261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.393 [2024-11-25 13:28:06.814289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.393 qpair failed and we were unable to recover it. 00:29:09.393 [2024-11-25 13:28:06.814401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.393 [2024-11-25 13:28:06.814428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.393 qpair failed and we were unable to recover it. 
00:29:09.393 [2024-11-25 13:28:06.815008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.393 [2024-11-25 13:28:06.815050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.393 qpair failed and we were unable to recover it.
00:29:09.395 [2024-11-25 13:28:06.825681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.825710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.825830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.825858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.825993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.826145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.826260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 
00:29:09.395 [2024-11-25 13:28:06.826386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.826530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.826672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.826841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.826958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.826986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 
00:29:09.395 [2024-11-25 13:28:06.827109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.827137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.827242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.827269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.827403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.827432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.827547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.827575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.827666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.827694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 
00:29:09.395 [2024-11-25 13:28:06.827792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.827819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.827937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.827965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.828082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.828110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.828226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.828253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.828338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.828366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 
00:29:09.395 [2024-11-25 13:28:06.828506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.828534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.828618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.828646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.828763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.828791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.395 [2024-11-25 13:28:06.828908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.395 [2024-11-25 13:28:06.828935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.395 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.829052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.829080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.829168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.829202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.829314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.829356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.829460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.829490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.829578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.829607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.829726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.829755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.829873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.829901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.829988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.830143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.830290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.830420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.830542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.830658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.830773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.830940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.830969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.831103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.831131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.831226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.831255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.831353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.831383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.831579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.831607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.831724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.831753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.831842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.831871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.831981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.832100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.832220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.832389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.832563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.832685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.832805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.832925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.832954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.833064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.833093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.833284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.833320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.833409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.833438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.833554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.833583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.833705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.833733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.833924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.833952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.834077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.834118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 
00:29:09.396 [2024-11-25 13:28:06.834240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.396 [2024-11-25 13:28:06.834269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.396 qpair failed and we were unable to recover it. 00:29:09.396 [2024-11-25 13:28:06.834393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.834422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.834540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.834568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.834691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.834720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.834840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.834867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 
00:29:09.397 [2024-11-25 13:28:06.835010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.835047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.835148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.835175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.835266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.835299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.835427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.835456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.835600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.835628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 
00:29:09.397 [2024-11-25 13:28:06.835747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.835775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.835888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.835916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.836029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.836171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.836290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 
00:29:09.397 [2024-11-25 13:28:06.836420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.836554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.836697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.836814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 00:29:09.397 [2024-11-25 13:28:06.836930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.397 [2024-11-25 13:28:06.836958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.397 qpair failed and we were unable to recover it. 
00:29:09.397 [entries repeat: the same "connect() failed, errno = 111" / "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet recurs from [2024-11-25 13:28:06.837081] through [2024-11-25 13:28:06.852054] for tqpair=0x7f83ec000b90, tqpair=0x7f83f4000b90 and tqpair=0x19b9fa0]
00:29:09.400 [2024-11-25 13:28:06.852176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.852203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.852287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.852322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.852444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.852471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.852585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.852612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.852721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.852749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 
00:29:09.400 [2024-11-25 13:28:06.852831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.852858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.852951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.852980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.853122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.853149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.853269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.853297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.853399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.853427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 
00:29:09.400 [2024-11-25 13:28:06.853508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.853535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.853649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.853678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.853795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.853822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.853917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.853945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.854029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.854057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 
00:29:09.400 [2024-11-25 13:28:06.854152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.854180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.854292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.854328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.854430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.854460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.854579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.854607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.400 qpair failed and we were unable to recover it. 00:29:09.400 [2024-11-25 13:28:06.854749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.400 [2024-11-25 13:28:06.854777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.854871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.854899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.854988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.855129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.855279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.855444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.855565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.855690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.855807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.855949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.855977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.856108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.856150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.856296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.856334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.856426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.856454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.856573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.856601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.856680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.856708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.856794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.856822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.856967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.856995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.857124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.857152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.857290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.857336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.857436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.857464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.857550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.857577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.857658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.857685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.857795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.857822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.857910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.857936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.858072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.858100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.858190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.858218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.858309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.858338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.858430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.858459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.858579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.858610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.858752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.858781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.858872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.858906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.859027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.859055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.859146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.859174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.859318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.859347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.859464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.859492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.859589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.859619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 
00:29:09.401 [2024-11-25 13:28:06.859756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.859785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.859906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.401 [2024-11-25 13:28:06.859934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.401 qpair failed and we were unable to recover it. 00:29:09.401 [2024-11-25 13:28:06.860050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.860078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.860202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.860243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.860376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.860407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.860525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.860553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.860641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.860668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.860786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.860813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.860923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.860950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.861033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.861197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.861317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.861428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.861535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.861677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.861782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.861921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.861947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.862090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.862118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.862227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.862254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.862348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.862377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.862498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.862525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.862641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.862673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.862766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.862792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.862881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.862908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.863024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.863132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.863269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.863395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.863560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.863703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.863839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.863958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.863985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.864072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.864099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.864193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.864221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.864312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.864339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.864485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.864513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.864633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.864661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.864750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.864777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.864862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.864889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 00:29:09.402 [2024-11-25 13:28:06.865001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.402 [2024-11-25 13:28:06.865029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.402 qpair failed and we were unable to recover it. 
00:29:09.402 [2024-11-25 13:28:06.865113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.865140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.865259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.865287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.865506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.865547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.865643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.865673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.865816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.865845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 
00:29:09.403 [2024-11-25 13:28:06.865957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.865985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.866078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.866196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.866301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.866437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 
00:29:09.403 [2024-11-25 13:28:06.866550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.866696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.866803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.866922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.866948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.867079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.867109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 
00:29:09.403 [2024-11-25 13:28:06.867219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.867260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.867367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.867397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.867519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.867549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.867643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.867671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.867788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.867816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 
00:29:09.403 [2024-11-25 13:28:06.867900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.867927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.868018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.868046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.868164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.868191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.868280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.868317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.868457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.868485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 
00:29:09.403 [2024-11-25 13:28:06.868575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.868602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.868720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.868746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.868875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.868901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.869053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.869080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.869167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.869197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 
00:29:09.403 [2024-11-25 13:28:06.869334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.869362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.869452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.869480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.869587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.869615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.869725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.869752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.869846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.869875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 
00:29:09.403 [2024-11-25 13:28:06.870001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.870029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.870128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.870154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.870248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.403 [2024-11-25 13:28:06.870275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.403 qpair failed and we were unable to recover it. 00:29:09.403 [2024-11-25 13:28:06.870371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.870397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.870491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.870517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 
00:29:09.404 [2024-11-25 13:28:06.870633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.870659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.870781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.870808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.870916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.870943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.871043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.871070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.871160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.871189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 
00:29:09.404 [2024-11-25 13:28:06.871312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.871340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.871458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.871487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.871626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.871654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.871744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.871771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.871891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.871919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 
00:29:09.404 [2024-11-25 13:28:06.872029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.872174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.872285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.872413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.872555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 
00:29:09.404 [2024-11-25 13:28:06.872667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.872791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.872916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.872956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.873056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.873175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 
00:29:09.404 [2024-11-25 13:28:06.873285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.873416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.873537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.873658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.873764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 
00:29:09.404 [2024-11-25 13:28:06.873875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.873901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.873994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.874021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.874105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.874133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.874181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.404 [2024-11-25 13:28:06.874220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.404 [2024-11-25 13:28:06.874235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.404 [2024-11-25 13:28:06.874248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.404 [2024-11-25 13:28:06.874244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.874258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:09.404 [2024-11-25 13:28:06.874269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.874390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.404 [2024-11-25 13:28:06.874415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.404 qpair failed and we were unable to recover it. 00:29:09.404 [2024-11-25 13:28:06.874530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.874555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.874668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.874692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.874779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.874803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.874908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.874934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 
00:29:09.405 [2024-11-25 13:28:06.875082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.875112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.875199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.875228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.875349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.875378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.875457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.875484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.875682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.875709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 
00:29:09.405 [2024-11-25 13:28:06.875825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.875852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.875966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.875994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.875961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:09.405 [2024-11-25 13:28:06.876013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:09.405 [2024-11-25 13:28:06.876083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.876079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:09.405 [2024-11-25 13:28:06.876110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 00:29:09.405 [2024-11-25 13:28:06.876063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:09.405 [2024-11-25 13:28:06.876202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.405 [2024-11-25 13:28:06.876227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.405 qpair failed and we were unable to recover it. 
00:29:09.405 [2024-11-25 13:28:06.876342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.876367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.876468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.876493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.876582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.876606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.876734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.876763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.876856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.876886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.876970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.876997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.877096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.877124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.877239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.877267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.877373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.877401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.877494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.877523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.877609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.877637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.877747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.877775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.877861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.877890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.878859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.878887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.879089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.879117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.879204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.879233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.879324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.405 [2024-11-25 13:28:06.879353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.405 qpair failed and we were unable to recover it.
00:29:09.405 [2024-11-25 13:28:06.879445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.879473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.879562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.879589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.879682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.879708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.879798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.879827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.879920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.879947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.880932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.880960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.881930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.881957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.882966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.882993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.883086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.883113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.883203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.883231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.883349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.883377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.883517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.883544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.883668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.883696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.883803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.883831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.883928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.883957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.406 [2024-11-25 13:28:06.884056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.406 [2024-11-25 13:28:06.884097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.406 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.884215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.884243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.884368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.884396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.884488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.884515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.884601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.884629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.884717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.884743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.884828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.884856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.884948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.884976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.885955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.885985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.886100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.886128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.886230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.886258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.886347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.886375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.886494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.886522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.886631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.886659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.886748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.886776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.886892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.886920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.887931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.887961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.888048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.888076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.888172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.888201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.888295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.888332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.888419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.407 [2024-11-25 13:28:06.888447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.407 qpair failed and we were unable to recover it.
00:29:09.407 [2024-11-25 13:28:06.888586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.407 [2024-11-25 13:28:06.888614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.407 qpair failed and we were unable to recover it. 00:29:09.407 [2024-11-25 13:28:06.888703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.407 [2024-11-25 13:28:06.888731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.407 qpair failed and we were unable to recover it. 00:29:09.407 [2024-11-25 13:28:06.888847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.407 [2024-11-25 13:28:06.888875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.407 qpair failed and we were unable to recover it. 00:29:09.407 [2024-11-25 13:28:06.889000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.889111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 
00:29:09.408 [2024-11-25 13:28:06.889232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.889358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.889478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.889611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.889778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 
00:29:09.408 [2024-11-25 13:28:06.889907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.889933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 
00:29:09.408 [2024-11-25 13:28:06.890488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.890885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.890976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 
00:29:09.408 [2024-11-25 13:28:06.891138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.891300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.891428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.891551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.891661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 
00:29:09.408 [2024-11-25 13:28:06.891772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.891894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.891922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.892008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.892141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.892255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 
00:29:09.408 [2024-11-25 13:28:06.892411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.892527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.892633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.892760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.892880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.892906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 
00:29:09.408 [2024-11-25 13:28:06.892992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.893022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.893103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.408 [2024-11-25 13:28:06.893131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.408 qpair failed and we were unable to recover it. 00:29:09.408 [2024-11-25 13:28:06.893252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.893280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.893383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.893411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.893502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.893529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 
00:29:09.409 [2024-11-25 13:28:06.893614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.893641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.893721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.893747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.893858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.893885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.893992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.894108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 
00:29:09.409 [2024-11-25 13:28:06.894257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.894385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.894511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.894660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.894773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 
00:29:09.409 [2024-11-25 13:28:06.894919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.894947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.895035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.895147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.895283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.895420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 
00:29:09.409 [2024-11-25 13:28:06.895526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.895661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.895799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.895919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.895947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.896085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 
00:29:09.409 [2024-11-25 13:28:06.896203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.896336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.896459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.896577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.896729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 
00:29:09.409 [2024-11-25 13:28:06.896842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.896966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.896994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.897090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.897132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.897227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.897256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.897367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.897396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 
00:29:09.409 [2024-11-25 13:28:06.897490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.897519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.897640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.897667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.897749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.897777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.897866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.409 [2024-11-25 13:28:06.897894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.409 qpair failed and we were unable to recover it. 00:29:09.409 [2024-11-25 13:28:06.897982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 
00:29:09.410 [2024-11-25 13:28:06.898127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.898269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.898395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.898520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.898641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 
00:29:09.410 [2024-11-25 13:28:06.898768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.898883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.898921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.899012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.899136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.899245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 
00:29:09.410 [2024-11-25 13:28:06.899377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.899520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.899665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.899814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.899934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.899963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 
00:29:09.410 [2024-11-25 13:28:06.900045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.900073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.900195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.900236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.900329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.900357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.900480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.900507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 00:29:09.410 [2024-11-25 13:28:06.900619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.410 [2024-11-25 13:28:06.900646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.410 qpair failed and we were unable to recover it. 
00:29:09.410 [2024-11-25 13:28:06.900731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.900758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.900854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.900882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.901905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.901992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.902020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.902141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.902168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.902263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.902291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.902412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.902439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.902547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.902588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.902682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.902711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.410 qpair failed and we were unable to recover it.
00:29:09.410 [2024-11-25 13:28:06.902805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.410 [2024-11-25 13:28:06.902833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.902947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.902976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.903965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.903993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.904113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.904229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.904352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.904498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.904624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.904737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.904894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.904988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.905948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.905976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.906061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.906090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.906169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.906197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.906332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.906361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.906480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.906509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.906630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.906658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.906750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.906778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.906909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.906937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.907055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.907084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.907174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.907202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.907284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.907321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.907442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.411 [2024-11-25 13:28:06.907470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.411 qpair failed and we were unable to recover it.
00:29:09.411 [2024-11-25 13:28:06.907552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.907579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.907671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.907706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.907786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.907815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.907914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.907943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.908950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.908977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.909936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.909965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.910963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.910991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.911147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.911251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.911410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.911536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.911654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.911756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.911885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.911975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.912004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.912088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.912118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.912202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.412 [2024-11-25 13:28:06.912230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.412 qpair failed and we were unable to recover it.
00:29:09.412 [2024-11-25 13:28:06.912327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.413 [2024-11-25 13:28:06.912355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.413 qpair failed and we were unable to recover it.
00:29:09.413 [2024-11-25 13:28:06.912471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.413 [2024-11-25 13:28:06.912498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.413 qpair failed and we were unable to recover it.
00:29:09.413 [2024-11-25 13:28:06.912612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.413 [2024-11-25 13:28:06.912639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.413 qpair failed and we were unable to recover it.
00:29:09.413 [2024-11-25 13:28:06.912715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.413 [2024-11-25 13:28:06.912742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.413 qpair failed and we were unable to recover it.
00:29:09.413 [2024-11-25 13:28:06.912835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.912863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.912945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.912972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.913057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.913169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.913277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 
00:29:09.413 [2024-11-25 13:28:06.913396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.913516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.913635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.913749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.913864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.913892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 
00:29:09.413 [2024-11-25 13:28:06.914013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.914125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.914268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.914394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.914519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 
00:29:09.413 [2024-11-25 13:28:06.914641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.914753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.914868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.914896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.914980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.915133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 
00:29:09.413 [2024-11-25 13:28:06.915247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.915367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.915515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.915640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.915752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 
00:29:09.413 [2024-11-25 13:28:06.915893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.915922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.916023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.916051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.916161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.916203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.916320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.916350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 00:29:09.413 [2024-11-25 13:28:06.916473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.413 [2024-11-25 13:28:06.916502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.413 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.916595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.916623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.916707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.916735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.916818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.916845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.916986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.917129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.917252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.917371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.917495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.917612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.917718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.917864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.917893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.917985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.918107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.918247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.918366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.918475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.918600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.918736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.918852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.918881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.918976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.919108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.919253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.919374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.919486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.919597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.919740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.919885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.919913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.919999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.920120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.920267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.920385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.920495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.920636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.920774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.920909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.920936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 
00:29:09.414 [2024-11-25 13:28:06.921052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.921081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.921178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.921218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.414 [2024-11-25 13:28:06.921322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.414 [2024-11-25 13:28:06.921351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.414 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.921448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.921475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.921590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.921617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 
00:29:09.415 [2024-11-25 13:28:06.921730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.921758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.921842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.921868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.921958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.921985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.922078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.922104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.922194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.922222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 
00:29:09.415 [2024-11-25 13:28:06.922345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.922374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.922467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.922494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.922609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.922638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.922721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.922750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.922861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.922888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 
00:29:09.415 [2024-11-25 13:28:06.922982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.923105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.923217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.923362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.923503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 
00:29:09.415 [2024-11-25 13:28:06.923615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.923753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.923858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.923885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.923983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.924009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.924113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.924140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 
00:29:09.415 [2024-11-25 13:28:06.924228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.924256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.924349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.924376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.924455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.924482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.924597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.924625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 00:29:09.415 [2024-11-25 13:28:06.924714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.415 [2024-11-25 13:28:06.924742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.415 qpair failed and we were unable to recover it. 
00:29:09.415 [2024-11-25 13:28:06.924820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.924848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.924929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.924956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.925922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.925950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.926070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.926098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.926178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.415 [2024-11-25 13:28:06.926205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-11-25 13:28:06.926288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.926326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.926419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.926446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.926537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.926564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.926652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.926680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.926792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.926820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.926909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.926936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.927955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.927983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.928908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.928935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.929022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.929049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.929147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.929189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.929287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.929322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.929442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.929470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.929588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.929616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.929764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.929791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.929891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.929918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.930932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.930960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.931049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.931075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.931174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.931202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.931335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.931363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.931455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.931483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-11-25 13:28:06.931581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.416 [2024-11-25 13:28:06.931608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.931697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.931724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.931816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.931843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.931922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.931949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.932956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.932982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.933912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.933938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.934042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.934152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.934259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.934410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.934648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.934767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.934885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.934983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.935895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.935990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.936018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.936107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.417 [2024-11-25 13:28:06.936135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-11-25 13:28:06.936217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.418 [2024-11-25 13:28:06.936246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.418 qpair failed and we were unable to recover it.
00:29:09.418 [2024-11-25 13:28:06.936438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.418 [2024-11-25 13:28:06.936466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.418 qpair failed and we were unable to recover it.
00:29:09.418 [2024-11-25 13:28:06.936585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.418 [2024-11-25 13:28:06.936613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.418 qpair failed and we were unable to recover it.
00:29:09.418 [2024-11-25 13:28:06.936732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.418 [2024-11-25 13:28:06.936761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.418 qpair failed and we were unable to recover it.
00:29:09.418 [2024-11-25 13:28:06.936836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.418 [2024-11-25 13:28:06.936864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.418 qpair failed and we were unable to recover it.
00:29:09.418 [2024-11-25 13:28:06.936978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.937129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.937243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.937370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.937478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 
00:29:09.418 [2024-11-25 13:28:06.937623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.937734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.937866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.937894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.938009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.938154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 
00:29:09.418 [2024-11-25 13:28:06.938294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.938416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.938557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.938660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.938766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 
00:29:09.418 [2024-11-25 13:28:06.938908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.938935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.939028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.939143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.939251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.939380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 
00:29:09.418 [2024-11-25 13:28:06.939522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.939645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.939757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.939894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.418 [2024-11-25 13:28:06.939921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.418 qpair failed and we were unable to recover it. 00:29:09.418 [2024-11-25 13:28:06.940004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.940031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.940244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.940285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.940407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.940437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.940530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.940557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.940750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.940778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.940905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.940933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.941047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.941164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.941309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.941457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.941610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.941722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.941838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.941948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.941976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.942055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.942200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.942318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.942448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.942586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.942730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.942845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.942963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.942992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.943079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.943193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.943320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.943427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.943578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.943684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.943818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.943958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.943986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.944100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.944128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.944209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.944236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.944435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.944464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.944554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.944582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.944705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.944733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.944820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.944847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 
00:29:09.419 [2024-11-25 13:28:06.945038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.945066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.945187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.945216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.419 qpair failed and we were unable to recover it. 00:29:09.419 [2024-11-25 13:28:06.945307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.419 [2024-11-25 13:28:06.945334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.945430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.945457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.945571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.945597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 
00:29:09.420 [2024-11-25 13:28:06.945675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.945707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.945794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.945821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.945934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.945962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.946051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.946165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 
00:29:09.420 [2024-11-25 13:28:06.946277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.946397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.946505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.946621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.946745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 
00:29:09.420 [2024-11-25 13:28:06.946884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.946911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.946997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.947122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.947267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.947394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 
00:29:09.420 [2024-11-25 13:28:06.947535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.947654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.947784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.947896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.947923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 00:29:09.420 [2024-11-25 13:28:06.948038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.420 [2024-11-25 13:28:06.948064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.420 qpair failed and we were unable to recover it. 
00:29:09.420 [2024-11-25 13:28:06.948152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.420 [2024-11-25 13:28:06.948178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420
00:29:09.420 qpair failed and we were unable to recover it.
[log condensed: the same three-line sequence — connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats ~115 more times between 13:28:06.948 and 13:28:06.962, mostly for tqpair=0x19b9fa0, with a run of attempts for tqpair=0x7f83ec000b90 at 13:28:06.954-06.955]
00:29:09.423 [2024-11-25 13:28:06.962412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-11-25 13:28:06.962438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.423 qpair failed and we were unable to recover it. 00:29:09.423 [2024-11-25 13:28:06.962581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-11-25 13:28:06.962609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.423 qpair failed and we were unable to recover it. 00:29:09.423 [2024-11-25 13:28:06.962696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-11-25 13:28:06.962724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.423 qpair failed and we were unable to recover it. 00:29:09.423 [2024-11-25 13:28:06.962815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-11-25 13:28:06.962842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.423 qpair failed and we were unable to recover it. 00:29:09.423 [2024-11-25 13:28:06.962963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.423 [2024-11-25 13:28:06.962991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.423 qpair failed and we were unable to recover it. 
00:29:09.423 [2024-11-25 13:28:06.963100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.423 [2024-11-25 13:28:06.963142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420
00:29:09.423 qpair failed and we were unable to recover it.
00:29:09.426 [2024-11-25 13:28:06.974407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.426 [2024-11-25 13:28:06.974465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420
00:29:09.426 qpair failed and we were unable to recover it.
00:29:09.426 [2024-11-25 13:28:06.974598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.974646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.974771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.974814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.974969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83e8000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.975221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.975337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 
00:29:09.426 [2024-11-25 13:28:06.975453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.975566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.975709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.975850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.975960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.975988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 
00:29:09.426 [2024-11-25 13:28:06.976125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.976153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.976232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.976260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.976347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.976374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.976493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.976521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.976608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.976636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 
00:29:09.426 [2024-11-25 13:28:06.976749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.976777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.976888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.976915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.976997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.977135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.977283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 
00:29:09.426 [2024-11-25 13:28:06.977420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.977528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.977641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.977799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 00:29:09.426 [2024-11-25 13:28:06.977941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.426 [2024-11-25 13:28:06.977969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.426 qpair failed and we were unable to recover it. 
00:29:09.426 [2024-11-25 13:28:06.978055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.978082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.978177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.978204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.978282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.978329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.978445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.978473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.978565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.978594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 
00:29:09.427 [2024-11-25 13:28:06.978680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.978709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.978825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.978855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.978978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.979084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.979223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 
00:29:09.427 [2024-11-25 13:28:06.979375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.979521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.979636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.979746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.979855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.979889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 
00:29:09.427 [2024-11-25 13:28:06.980005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.980115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.980259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.980386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.980528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 
00:29:09.427 [2024-11-25 13:28:06.980639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.980773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.980892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.980920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.981007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.981122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 
00:29:09.427 [2024-11-25 13:28:06.981243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.981360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.981477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.981597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.981717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 
00:29:09.427 [2024-11-25 13:28:06.981861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.427 [2024-11-25 13:28:06.981889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.427 qpair failed and we were unable to recover it. 00:29:09.427 [2024-11-25 13:28:06.982004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.982125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.982234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.982362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.428 [2024-11-25 13:28:06.982480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.982596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.982702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.982819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.982936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.982964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.428 [2024-11-25 13:28:06.983095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.983124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.983269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.983297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.983414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.983442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.983525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.983553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.983637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.983664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.428 [2024-11-25 13:28:06.983776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.983804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.983888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.983916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.984039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.984156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.984296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.428 [2024-11-25 13:28:06.984432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.984547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.984674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.984789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 00:29:09.428 [2024-11-25 13:28:06.984905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.428 [2024-11-25 13:28:06.984938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.428 qpair failed and we were unable to recover it. 
00:29:09.431 [2024-11-25 13:28:06.999472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:06.999515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:06.999643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:06.999672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:06.999782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:06.999810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:06.999931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:06.999960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.000045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.000073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 
00:29:09.431 [2024-11-25 13:28:07.000210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.000238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.000363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.000392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.000502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.000529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.000616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.000644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.000782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.000815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 
00:29:09.431 [2024-11-25 13:28:07.000928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.000956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.001047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.001075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.001158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.001185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.001272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.001300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.001404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.001431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 
00:29:09.431 [2024-11-25 13:28:07.001518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.001545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.001627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.001653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.001766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.431 [2024-11-25 13:28:07.001793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.431 qpair failed and we were unable to recover it. 00:29:09.431 [2024-11-25 13:28:07.001886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.432 [2024-11-25 13:28:07.001912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.432 qpair failed and we were unable to recover it. 00:29:09.432 [2024-11-25 13:28:07.002000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.432 [2024-11-25 13:28:07.002027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.432 qpair failed and we were unable to recover it. 
00:29:09.432 [2024-11-25 13:28:07.002113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.432 [2024-11-25 13:28:07.002140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.432 qpair failed and we were unable to recover it. 00:29:09.432 [2024-11-25 13:28:07.002251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.432 [2024-11-25 13:28:07.002279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.432 qpair failed and we were unable to recover it. 00:29:09.432 A controller has encountered a failure and is being reset. 00:29:09.432 [2024-11-25 13:28:07.002407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.432 [2024-11-25 13:28:07.002449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83f4000b90 with addr=10.0.0.2, port=4420 00:29:09.432 qpair failed and we were unable to recover it. 00:29:09.432 [2024-11-25 13:28:07.002561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.432 [2024-11-25 13:28:07.002595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.432 qpair failed and we were unable to recover it. 00:29:09.432 [2024-11-25 13:28:07.002706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.432 [2024-11-25 13:28:07.002745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f83ec000b90 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-11-25 13:28:07.002853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.002884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.002975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.003095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.003204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.003332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-11-25 13:28:07.003465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.003609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.003726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.003845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.003873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.003982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-11-25 13:28:07.004097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.004215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.004341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.004500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.004620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-11-25 13:28:07.004761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.004868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.004894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.004987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.005101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.005212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-11-25 13:28:07.005321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.005476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.005597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.005715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.005858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.005884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 
00:29:09.691 [2024-11-25 13:28:07.005981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.006008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.006095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.691 [2024-11-25 13:28:07.006121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.691 qpair failed and we were unable to recover it. 00:29:09.691 [2024-11-25 13:28:07.006199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.006225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.006313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.006342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.006423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.006450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-11-25 13:28:07.006587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.006614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.006695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.006722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.006803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.006829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.006912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.006938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.007049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.007075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-11-25 13:28:07.007162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.007189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.007311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.007339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.007448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.007475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.007593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.007624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.007735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.007762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-11-25 13:28:07.008008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.008121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.008270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.008391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.008502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-11-25 13:28:07.008609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.008745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.008859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.008887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.008980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.009008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.692 [2024-11-25 13:28:07.009119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.009147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-11-25 13:28:07.009227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.009254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.009346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:09.692 [2024-11-25 13:28:07.009374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.009478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.009506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.009623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.009650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.009731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.692 [2024-11-25 13:28:07.009758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-11-25 13:28:07.009848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.009875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.009960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.009988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.692 [2024-11-25 13:28:07.010109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.010136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.010214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.010243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.692 [2024-11-25 13:28:07.010326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.010353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 
00:29:09.692 [2024-11-25 13:28:07.010432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.010460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.010545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.010571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.010690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.010717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.692 qpair failed and we were unable to recover it. 00:29:09.692 [2024-11-25 13:28:07.010799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.692 [2024-11-25 13:28:07.010830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.010909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.010936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 
00:29:09.693 [2024-11-25 13:28:07.011041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.011159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.011265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.011380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.011485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 
00:29:09.693 [2024-11-25 13:28:07.011588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.011711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.011853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.011960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.011987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.012101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.012128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 
00:29:09.693 [2024-11-25 13:28:07.012214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.012243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.012326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.012354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b9fa0 with addr=10.0.0.2, port=4420 00:29:09.693 qpair failed and we were unable to recover it. 00:29:09.693 [2024-11-25 13:28:07.012522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.693 [2024-11-25 13:28:07.012571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19c7f30 with addr=10.0.0.2, port=4420 00:29:09.693 [2024-11-25 13:28:07.012594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c7f30 is same with the state(6) to be set 00:29:09.693 [2024-11-25 13:28:07.012621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c7f30 (9): Bad file descriptor 00:29:09.693 [2024-11-25 13:28:07.012641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:09.693 [2024-11-25 13:28:07.012657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:09.693 [2024-11-25 13:28:07.012674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:09.693 Unable to reset the controller. 
00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.693 Malloc0 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.693 [2024-11-25 13:28:07.078712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.693 [2024-11-25 13:28:07.106974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 
-- # set +x 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.693 13:28:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3287012 00:29:10.624 Controller properly reset. 00:29:15.878 Initializing NVMe Controllers 00:29:15.878 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.878 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:15.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:15.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:15.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:15.878 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:15.878 Initialization complete. Launching workers. 
00:29:15.878 Starting thread on core 1 00:29:15.878 Starting thread on core 2 00:29:15.878 Starting thread on core 3 00:29:15.878 Starting thread on core 0 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:15.878 00:29:15.878 real 0m10.693s 00:29:15.878 user 0m34.187s 00:29:15.878 sys 0m7.235s 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:15.878 ************************************ 00:29:15.878 END TEST nvmf_target_disconnect_tc2 00:29:15.878 ************************************ 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.878 13:28:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.878 rmmod nvme_tcp 00:29:15.878 rmmod nvme_fabrics 00:29:15.878 rmmod nvme_keyring 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3287437 ']' 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3287437 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3287437 ']' 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3287437 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287437 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287437' 00:29:15.878 killing process with pid 3287437 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3287437 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3287437 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.878 13:28:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.781 13:28:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.781 00:29:17.781 real 0m15.822s 00:29:17.781 user 0m59.974s 00:29:17.781 sys 0m9.865s 00:29:17.781 13:28:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.781 13:28:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:17.781 ************************************ 00:29:17.781 END TEST nvmf_target_disconnect 00:29:17.781 ************************************ 00:29:17.781 13:28:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:17.781 00:29:17.781 real 5m5.358s 00:29:17.781 user 11m4.868s 00:29:17.781 sys 1m15.710s 00:29:17.781 13:28:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.781 13:28:15 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.781 ************************************ 00:29:17.781 END TEST nvmf_host 00:29:17.781 ************************************ 00:29:17.781 13:28:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:17.781 13:28:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:17.781 13:28:15 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:17.781 13:28:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:17.781 13:28:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.781 13:28:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.039 ************************************ 00:29:18.039 START TEST nvmf_target_core_interrupt_mode 00:29:18.039 ************************************ 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:18.039 * Looking for test storage... 
00:29:18.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:18.039 13:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.039 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.040 --rc 
genhtml_branch_coverage=1 00:29:18.040 --rc genhtml_function_coverage=1 00:29:18.040 --rc genhtml_legend=1 00:29:18.040 --rc geninfo_all_blocks=1 00:29:18.040 --rc geninfo_unexecuted_blocks=1 00:29:18.040 00:29:18.040 ' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.040 --rc genhtml_branch_coverage=1 00:29:18.040 --rc genhtml_function_coverage=1 00:29:18.040 --rc genhtml_legend=1 00:29:18.040 --rc geninfo_all_blocks=1 00:29:18.040 --rc geninfo_unexecuted_blocks=1 00:29:18.040 00:29:18.040 ' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.040 --rc genhtml_branch_coverage=1 00:29:18.040 --rc genhtml_function_coverage=1 00:29:18.040 --rc genhtml_legend=1 00:29:18.040 --rc geninfo_all_blocks=1 00:29:18.040 --rc geninfo_unexecuted_blocks=1 00:29:18.040 00:29:18.040 ' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:18.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.040 --rc genhtml_branch_coverage=1 00:29:18.040 --rc genhtml_function_coverage=1 00:29:18.040 --rc genhtml_legend=1 00:29:18.040 --rc geninfo_all_blocks=1 00:29:18.040 --rc geninfo_unexecuted_blocks=1 00:29:18.040 00:29:18.040 ' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.040 
13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.040 13:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.040 
13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:18.040 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:18.041 ************************************ 00:29:18.041 START TEST nvmf_abort 00:29:18.041 ************************************ 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:18.041 * Looking for test storage... 
00:29:18.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:29:18.041 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.299 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:18.300 13:28:15 
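The cmp_versions walk above splits each version into components and compares them left to right; 1.15 < 2 holds, so the pre-2.0 lcov --rc options are selected. The comparison reduced to a standalone helper (`lt` here is illustrative; the real scripts/common.sh splits on ".-:" and handles more cases):

```shell
# Standalone reduction of the cmp_versions logic traced above.
# Components are compared numerically, left to right; missing
# components count as 0.
lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0      # strictly less: done
        elif (( x > y )); then return 1
        fi
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2: use pre-2.0 --rc options"
```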
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.300 --rc genhtml_branch_coverage=1 00:29:18.300 --rc genhtml_function_coverage=1 00:29:18.300 --rc genhtml_legend=1 00:29:18.300 --rc geninfo_all_blocks=1 00:29:18.300 --rc geninfo_unexecuted_blocks=1 00:29:18.300 00:29:18.300 ' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.300 --rc genhtml_branch_coverage=1 00:29:18.300 --rc genhtml_function_coverage=1 00:29:18.300 --rc genhtml_legend=1 00:29:18.300 --rc geninfo_all_blocks=1 00:29:18.300 --rc geninfo_unexecuted_blocks=1 00:29:18.300 00:29:18.300 ' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.300 --rc genhtml_branch_coverage=1 00:29:18.300 --rc genhtml_function_coverage=1 00:29:18.300 --rc genhtml_legend=1 00:29:18.300 --rc geninfo_all_blocks=1 00:29:18.300 --rc geninfo_unexecuted_blocks=1 00:29:18.300 00:29:18.300 ' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:18.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.300 --rc genhtml_branch_coverage=1 00:29:18.300 --rc genhtml_function_coverage=1 00:29:18.300 --rc genhtml_legend=1 00:29:18.300 --rc geninfo_all_blocks=1 00:29:18.300 --rc geninfo_unexecuted_blocks=1 00:29:18.300 00:29:18.300 ' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.300 13:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.300 13:28:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.300 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.301 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.301 13:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.829 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:20.830 13:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:20.830 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:20.830 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.830 
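gather_supported_nvmf_pci_devs above matches each PCI function's vendor:device pair against known Intel E810/X722 and Mellanox IDs; both ports found in this run (0x8086 - 0x159b) land in the e810 bucket. The bucketing reduced to a pure function (E810/X722 IDs are copied from the trace, the Mellanox list is collapsed to a vendor wildcard here, and `classify_nic` is an illustrative name, not an SPDK helper):

```shell
# Vendor:device bucketing in the spirit of gather_supported_nvmf_pci_devs.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:*)                    echo mlx ;;    # Mellanox (0x1013..0xa2dc enumerated in the trace)
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b   # both ports found in this run are E810
```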
13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:20.830 Found net devices under 0000:09:00.0: cvl_0_0 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:20.830 Found net devices under 0000:09:00.1: cvl_0_1 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.830 13:28:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:20.830 13:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:20.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:29:20.830 00:29:20.830 --- 10.0.0.2 ping statistics --- 00:29:20.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.830 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
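The nvmf_tcp_init steps above move the target-side port cvl_0_0 into a fresh namespace as 10.0.0.2 and leave its sibling cvl_0_1 in the root namespace as 10.0.0.1, then punch an iptables hole for port 4420 and verify both directions with ping. Collected into one dry-run function (names and addresses copied from the trace; DRY_RUN=echo prints the commands, set DRY_RUN= and run as root to actually apply them):

```shell
# Dry-run collection of the netns topology built by nvmf_tcp_init above.
DRY_RUN=${DRY_RUN-echo}   # default: print only; DRY_RUN= (as root) applies

setup_netns() {
    local ns=cvl_0_0_ns_spdk
    $DRY_RUN ip netns add "$ns"
    $DRY_RUN ip link set cvl_0_0 netns "$ns"              # target port into the netns
    $DRY_RUN ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in root ns
    $DRY_RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    $DRY_RUN ip link set cvl_0_1 up
    $DRY_RUN ip netns exec "$ns" ip link set cvl_0_0 up
    $DRY_RUN ip netns exec "$ns" ip link set lo up
    $DRY_RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}

setup_netns
```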
00:29:20.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:29:20.830 00:29:20.830 --- 10.0.0.1 ping statistics --- 00:29:20.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.830 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:20.830 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3290252 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3290252 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3290252 ']' 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 [2024-11-25 13:28:18.118739] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:20.831 [2024-11-25 13:28:18.119867] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
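waitforlisten above blocks until the freshly launched nvmf_tgt (pid 3290252) is reachable on /var/tmp/spdk.sock. A minimal illustrative poll loop (the real helper in common/autotest_common.sh issues actual RPCs; this sketch only waits for the socket node to appear, with the retry bound mirroring max_retries=100 from the trace):

```shell
# Illustrative waitforlisten: poll until the target's RPC socket exists,
# bailing out early if the target process dies. Simplified sketch only.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $sock ]] && return 0               # socket is up, RPCs can proceed
        sleep 0.1
    done
    return 1   # timed out
}
```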
00:29:20.831 [2024-11-25 13:28:18.119917] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.831 [2024-11-25 13:28:18.193515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:20.831 [2024-11-25 13:28:18.248117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.831 [2024-11-25 13:28:18.248172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.831 [2024-11-25 13:28:18.248200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:20.831 [2024-11-25 13:28:18.248211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:20.831 [2024-11-25 13:28:18.248220] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:20.831 [2024-11-25 13:28:18.249734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.831 [2024-11-25 13:28:18.249798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.831 [2024-11-25 13:28:18.249802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.831 [2024-11-25 13:28:18.333103] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:20.831 [2024-11-25 13:28:18.333197] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:20.831 [2024-11-25 13:28:18.333203] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:20.831 [2024-11-25 13:28:18.333465] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 [2024-11-25 13:28:18.390537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:20.831 Malloc0 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 Delay0 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 [2024-11-25 13:28:18.462775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.831 13:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:21.089 [2024-11-25 13:28:18.613385] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:23.611 Initializing NVMe Controllers 00:29:23.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:23.611 controller IO queue size 128 less than required 00:29:23.611 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:23.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:23.611 Initialization complete. Launching workers. 
00:29:23.611 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29240 00:29:23.611 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29301, failed to submit 66 00:29:23.611 success 29240, unsuccessful 61, failed 0 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.611 rmmod nvme_tcp 00:29:23.611 rmmod nvme_fabrics 00:29:23.611 rmmod nvme_keyring 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.611 13:28:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3290252 ']' 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3290252 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3290252 ']' 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3290252 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:23.611 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.612 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3290252 00:29:23.612 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.612 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.612 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3290252' 00:29:23.612 killing process with pid 3290252 00:29:23.612 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3290252 00:29:23.612 13:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3290252 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.612 13:28:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.612 13:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.516 00:29:25.516 real 0m7.434s 00:29:25.516 user 0m9.375s 00:29:25.516 sys 0m2.944s 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:25.516 ************************************ 00:29:25.516 END TEST nvmf_abort 00:29:25.516 ************************************ 00:29:25.516 13:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:25.516 ************************************ 00:29:25.516 START TEST nvmf_ns_hotplug_stress 00:29:25.516 ************************************ 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:25.516 * Looking for test storage... 
00:29:25.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.516 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.799 13:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:25.799 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.800 13:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.800 --rc genhtml_branch_coverage=1 00:29:25.800 --rc genhtml_function_coverage=1 00:29:25.800 --rc genhtml_legend=1 00:29:25.800 --rc geninfo_all_blocks=1 00:29:25.800 --rc geninfo_unexecuted_blocks=1 00:29:25.800 00:29:25.800 ' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.800 --rc genhtml_branch_coverage=1 00:29:25.800 --rc genhtml_function_coverage=1 00:29:25.800 --rc genhtml_legend=1 00:29:25.800 --rc geninfo_all_blocks=1 00:29:25.800 --rc geninfo_unexecuted_blocks=1 00:29:25.800 00:29:25.800 ' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.800 --rc genhtml_branch_coverage=1 00:29:25.800 --rc genhtml_function_coverage=1 00:29:25.800 --rc genhtml_legend=1 00:29:25.800 --rc geninfo_all_blocks=1 00:29:25.800 --rc geninfo_unexecuted_blocks=1 00:29:25.800 00:29:25.800 ' 00:29:25.800 13:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.800 --rc genhtml_branch_coverage=1 00:29:25.800 --rc genhtml_function_coverage=1 00:29:25.800 --rc genhtml_legend=1 00:29:25.800 --rc geninfo_all_blocks=1 00:29:25.800 --rc geninfo_unexecuted_blocks=1 00:29:25.800 00:29:25.800 ' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.800 13:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.800 
13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:25.800 13:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.700 
13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.700 13:28:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:27.700 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.700 13:28:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:27.700 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.700 
13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:27.700 Found net devices under 0000:09:00.0: cvl_0_0 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.700 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:27.701 Found net devices under 0000:09:00.1: cvl_0_1 00:29:27.701 
13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:27.701 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:27.958 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.958 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.958 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.958 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:29:27.959 00:29:27.959 --- 10.0.0.2 ping statistics --- 00:29:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.959 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:27.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:29:27.959 00:29:27.959 --- 10.0.0.1 ping statistics --- 00:29:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.959 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.959 13:28:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3292573 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3292573 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3292573 ']' 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.959 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:27.959 [2024-11-25 13:28:25.546438] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:27.959 [2024-11-25 13:28:25.547492] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:29:27.959 [2024-11-25 13:28:25.547560] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.217 [2024-11-25 13:28:25.622490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:28.217 [2024-11-25 13:28:25.680573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.217 [2024-11-25 13:28:25.680621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.217 [2024-11-25 13:28:25.680645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.217 [2024-11-25 13:28:25.680656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.217 [2024-11-25 13:28:25.680666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.217 [2024-11-25 13:28:25.682093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.217 [2024-11-25 13:28:25.682214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.217 [2024-11-25 13:28:25.682218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.217 [2024-11-25 13:28:25.768516] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:28.217 [2024-11-25 13:28:25.768752] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:28.217 [2024-11-25 13:28:25.768759] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:28.218 [2024-11-25 13:28:25.769024] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:28.218 13:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:28.476 [2024-11-25 13:28:26.066935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.476 13:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:28.733 13:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.991 [2024-11-25 13:28:26.607195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.991 13:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:29.249 13:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:29.814 Malloc0 00:29:29.814 13:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:29.814 Delay0 00:29:29.814 13:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.378 13:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:30.378 NULL1 00:29:30.378 13:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:30.635 13:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3292875 00:29:30.636 13:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:30.636 13:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:30.636 13:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:32.006 Read completed with error (sct=0, sc=11) 00:29:32.007 13:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:32.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.264 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.264 13:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:32.264 13:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:32.522 true 00:29:32.522 13:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:32.522 13:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.453 13:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.453 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:33.453 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:33.710 true 00:29:33.710 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:33.710 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.967 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.224 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:34.224 13:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:34.481 true 00:29:34.481 13:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:34.481 13:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.739 13:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.304 13:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:35.304 13:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:35.304 true 00:29:35.304 13:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:35.305 13:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.675 13:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.675 13:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:36.675 13:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:36.932 true 00:29:36.932 13:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:36.932 13:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.189 13:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.446 13:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:37.446 13:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:37.704 true 00:29:37.704 13:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:37.704 13:28:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.961 13:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.218 13:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:38.218 13:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:38.475 true 00:29:38.475 13:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:38.475 13:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.408 13:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.666 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:39.924 13:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:39.924 13:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:40.181 true 00:29:40.181 13:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:40.181 13:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.439 13:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.697 13:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:40.697 13:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:40.954 true 00:29:40.954 13:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:40.954 13:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.884 13:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.884 13:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:41.884 13:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:42.141 true 00:29:42.141 13:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:42.141 13:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.398 13:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.655 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:42.655 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:42.911 true 00:29:42.912 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:42.912 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.476 13:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.476 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:43.476 13:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:43.733 true 00:29:43.733 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:43.733 13:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:44.663 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.921 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:44.922 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:45.179 true 00:29:45.179 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:45.179 13:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.743 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.743 13:28:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:45.743 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:46.000 true 00:29:46.000 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:46.000 13:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.931 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:47.188 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:47.188 13:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:47.445 true 00:29:47.445 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:47.445 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:29:47.702 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.959 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:47.959 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:48.216 true 00:29:48.216 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:48.216 13:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.148 13:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:49.405 13:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:49.405 13:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:49.663 true 00:29:49.663 13:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:49.663 13:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.920 13:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.177 13:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:50.177 13:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:50.434 true 00:29:50.434 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:50.434 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.365 13:28:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.622 13:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:51.622 13:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:51.878 true 
00:29:51.879 13:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:51.879 13:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.136 13:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.700 13:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:52.700 13:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:52.700 true 00:29:52.700 13:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:52.700 13:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.957 13:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.212 13:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:53.212 13:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 
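The trace above is the core ns_hotplug_stress loop: while the target process (PID 3292875 in this run) is alive, the namespace is removed and re-added on cnode1 while the NULL1 bdev is resized one unit larger each pass (@44–@50). A minimal standalone sketch of that loop, with the `scripts/rpc.py` calls stubbed out by an illustrative `rpc` function so it runs without an SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress resize/remove/add loop seen in the trace.
# rpc() is a stand-in for scripts/rpc.py; the real test talks to nvmf_tgt.
rpc() { echo "rpc $*"; }

target_pid=$$        # stand-in for the nvmf_tgt PID (3292875 in the log)
null_size=1008

for _ in 1 2 3; do
    kill -0 "$target_pid" || break                              # @44: stop when target dies
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46
    ((++null_size))                                             # @49
    rpc bdev_null_resize NULL1 "$null_size"                     # @50
done
echo "final null_size=$null_size"
```

When the target exits, `kill -0` fails and the loop ends, which is exactly the "No such process" exit visible further down in the log.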
00:29:53.469 true 00:29:53.726 13:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:53.726 13:28:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.657 13:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.913 13:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:54.913 13:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:55.171 true 00:29:55.171 13:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:55.171 13:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.427 13:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.684 13:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:55.684 13:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:29:55.977 true 00:29:55.977 13:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:55.977 13:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.274 13:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.532 13:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:56.532 13:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:56.789 true 00:29:56.789 13:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:56.789 13:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:57.722 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.979 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:57.979 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:58.236 true 00:29:58.236 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:58.236 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.493 13:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.751 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:58.751 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:59.008 true 00:29:59.008 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:29:59.008 13:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.939 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.939 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:30:00.196 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:00.196 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:00.453 true 00:30:00.453 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:30:00.453 13:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.711 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.968 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:00.968 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:01.225 true 00:30:01.225 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875 00:30:01.225 13:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.156 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:02.156 Initializing NVMe Controllers
00:30:02.156 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:02.156 Controller IO queue size 128, less than required.
00:30:02.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:02.156 Controller IO queue size 128, less than required.
00:30:02.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:02.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:02.156 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:02.156 Initialization complete. Launching workers.
00:30:02.156 ========================================================
00:30:02.156                                                                      Latency(us)
00:30:02.156 Device Information                                                 :     IOPS    MiB/s    Average        min        max
00:30:02.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   797.90     0.39   78357.05    3448.53 1045716.92
00:30:02.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:  9421.67     4.60   13585.46    3049.48  539106.39
00:30:02.156 ========================================================
00:30:02.156 Total                                                              : 10219.57     4.99   18642.55    3049.48 1045716.92
00:30:02.156
00:30:02.156 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:30:02.156 13:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:30:02.414 true
00:30:02.414 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3292875
00:30:02.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line
44: kill: (3292875) - No such process 00:30:02.414 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3292875 00:30:02.414 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.671 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:02.930 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:02.930 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:02.930 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:02.930 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:02.930 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:03.188 null0 00:30:03.446 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:03.446 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:03.446 13:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:03.705 null1 00:30:03.705 13:29:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:03.705 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:03.705 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:03.963 null2 00:30:03.963 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:03.963 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:03.963 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:04.221 null3 00:30:04.221 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.221 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.221 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:04.479 null4 00:30:04.479 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.479 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.479 13:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:04.738 null5 00:30:04.738 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.738 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.738 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:04.995 null6 00:30:04.996 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.996 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.996 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:05.254 null7 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.254 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
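The trace entries above all come from the same `add_remove` helper in ns_hotplug_stress.sh, run once per namespace in the background. A minimal sketch of that pattern, reconstructed from the log (this is an assumption, not the verbatim script; the NQN, rpc.py path, and the `(( i < 10 ))` bound are taken from the trace, and `RPC` is an added override hook for dry runs):

```shell
#!/usr/bin/env bash
# Sketch of the add/remove stress loop seen in the trace above.
# RPC defaults to the rpc.py path from the log; set RPC=echo for a dry run.
RPC="${RPC:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py}"
NQN="nqn.2016-06.io.spdk:cnode1"

add_remove() {
    local nsid=$1 bdev=$2
    # Each worker re-attaches and detaches its namespace 10 times,
    # matching the "(( i < 10 ))" checks in the trace.
    for ((i = 0; i < 10; i++)); do
        $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        $RPC nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}
```

In the log the workers run concurrently: each `add_remove nsid bdev` is launched with `&`, its PID is collected via `pids+=($!)`, and the parent joins them all with `wait "${pids[@]}"` — which is why the add/remove entries for different namespaces interleave in the timestamps above.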
00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3297015 3297016 3297018 3297020 3297022 3297024 3297026 3297028 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.255 13:29:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:05.513 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.771 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:06.029 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.029 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:06.029 13:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:06.029 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:06.029 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:06.029 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:06.029 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:06.029 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:06.287 13:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.287 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.545 13:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:06.803 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:06.803 13:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.803 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:06.803 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:06.803 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:06.803 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:06.803 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:06.803 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.061 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:07.319 13:29:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.320 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:07.320 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:07.320 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:07.320 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:07.320 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:07.320 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.320 13:29:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:07.578 13:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.578 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.578 13:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:07.837 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:07.837 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.837 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:07.837 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.837 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:07.837 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:07.837 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:07.837 13:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:08.095 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.095 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.095 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:08.095 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.095 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.095 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.353 13:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:08.353 13:29:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.353 13:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:08.612 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.612 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:08.612 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:08.612 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:08.612 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:08.612 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:08.612 13:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:08.612 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.870 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:09.128 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.386 13:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:09.386 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.386 13:29:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.387 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:09.387 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.387 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.387 13:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:09.645 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:09.902 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.902 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.902 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:09.902 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.902 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.902 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:10.160 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.160 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.160 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.160 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.160 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:10.160 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.161 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.417 13:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.674 13:29:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.674 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.675 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:10.675 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.675 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.675 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:10.932 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@121 -- # sync 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.189 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.190 rmmod nvme_tcp 00:30:11.190 rmmod nvme_fabrics 00:30:11.190 rmmod nvme_keyring 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3292573 ']' 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3292573 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3292573 ']' 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3292573 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.190 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3292573 00:30:11.447 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:11.447 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:11.447 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3292573' 00:30:11.447 killing process with pid 3292573 00:30:11.447 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3292573 00:30:11.447 13:29:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3292573 00:30:11.447 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.447 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.709 
13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.709 13:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.613 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:13.613 00:30:13.613 real 0m48.046s 00:30:13.613 user 3m21.028s 00:30:13.613 sys 0m22.179s 00:30:13.613 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:13.613 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:13.613 ************************************ 00:30:13.613 END TEST nvmf_ns_hotplug_stress 00:30:13.613 ************************************ 00:30:13.613 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:13.614 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:13.614 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.614 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:13.614 ************************************ 00:30:13.614 START TEST nvmf_delete_subsystem 00:30:13.614 ************************************ 00:30:13.614 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:13.614 * Looking for test storage... 00:30:13.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:13.614 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:13.614 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:30:13.614 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.873 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.874 
13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:13.874 13:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:13.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.874 --rc genhtml_branch_coverage=1 00:30:13.874 --rc genhtml_function_coverage=1 00:30:13.874 --rc genhtml_legend=1 00:30:13.874 --rc geninfo_all_blocks=1 00:30:13.874 --rc geninfo_unexecuted_blocks=1 00:30:13.874 00:30:13.874 ' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:13.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.874 --rc genhtml_branch_coverage=1 00:30:13.874 --rc genhtml_function_coverage=1 00:30:13.874 --rc genhtml_legend=1 00:30:13.874 --rc geninfo_all_blocks=1 00:30:13.874 --rc geninfo_unexecuted_blocks=1 00:30:13.874 00:30:13.874 ' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:13.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.874 --rc genhtml_branch_coverage=1 00:30:13.874 --rc genhtml_function_coverage=1 00:30:13.874 --rc genhtml_legend=1 00:30:13.874 --rc geninfo_all_blocks=1 00:30:13.874 --rc 
geninfo_unexecuted_blocks=1 00:30:13.874 00:30:13.874 ' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:13.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.874 --rc genhtml_branch_coverage=1 00:30:13.874 --rc genhtml_function_coverage=1 00:30:13.874 --rc genhtml_legend=1 00:30:13.874 --rc geninfo_all_blocks=1 00:30:13.874 --rc geninfo_unexecuted_blocks=1 00:30:13.874 00:30:13.874 ' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.874 
13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.874 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.875 13:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.875 13:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:16.409 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:30:16.409 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.409 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.410 13:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:16.410 Found net devices under 0000:09:00.0: cvl_0_0 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:16.410 Found net devices under 0000:09:00.1: cvl_0_1 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:16.410 13:29:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:16.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:30:16.410 00:30:16.410 --- 10.0.0.2 ping statistics --- 00:30:16.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.410 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:30:16.410 00:30:16.410 --- 10.0.0.1 ping statistics --- 00:30:16.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.410 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3299899 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3299899 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3299899 ']' 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.410 13:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.410 [2024-11-25 13:29:13.817744] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:16.410 [2024-11-25 13:29:13.818843] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:30:16.410 [2024-11-25 13:29:13.818920] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.410 [2024-11-25 13:29:13.892184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:16.410 [2024-11-25 13:29:13.946640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.410 [2024-11-25 13:29:13.946698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.410 [2024-11-25 13:29:13.946721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.410 [2024-11-25 13:29:13.946731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.410 [2024-11-25 13:29:13.946742] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.410 [2024-11-25 13:29:13.948089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.410 [2024-11-25 13:29:13.948094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.411 [2024-11-25 13:29:14.032591] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:16.411 [2024-11-25 13:29:14.032686] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:16.411 [2024-11-25 13:29:14.032884] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:16.411 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.411 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:16.411 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.411 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.411 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 [2024-11-25 13:29:14.084914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 [2024-11-25 13:29:14.105126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 NULL1 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 Delay0 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.668 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.669 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.669 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3299923 00:30:16.669 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:16.669 13:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:16.669 [2024-11-25 13:29:14.181185] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:18.562 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.562 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.562 13:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, 
sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 [2024-11-25 13:29:16.382509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa2b800d6c0 is same with the state(6) to be set 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 
00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 [2024-11-25 13:29:16.383977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa2b800d390 is same with the state(6) to be set 00:30:18.819 
Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 
00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 starting I/O failed: -6 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Write completed with error (sct=0, sc=8) 00:30:18.819 Read completed with error (sct=0, sc=8) 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 Write completed with error (sct=0, sc=8) 00:30:18.820 Write completed with error (sct=0, sc=8) 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 
starting I/O failed: -6 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 Write completed with error (sct=0, sc=8) 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 [2024-11-25 13:29:16.384458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa2b8000c80 is same with the state(6) to be set 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 Read completed with error (sct=0, sc=8) 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:18.820 starting I/O failed: -6 00:30:19.804 [2024-11-25 13:29:17.360873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13429a0 is same with the state(6) to be set 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read 
completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 [2024-11-25 13:29:17.383040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13414a0 is same with the state(6) to be set 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with 
error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 [2024-11-25 13:29:17.386753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13412c0 is same with the state(6) to be set 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, 
sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 [2024-11-25 13:29:17.387008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1341860 is same with the state(6) to be set 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Read completed with error (sct=0, sc=8) 00:30:19.804 Write completed with error (sct=0, sc=8) 00:30:19.804 [2024-11-25 13:29:17.387146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa2b800d060 is same with the state(6) to be set 00:30:19.804 Initializing NVMe Controllers 00:30:19.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.804 Controller IO queue size 128, less than required. 
00:30:19.804 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:19.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:19.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:19.804 Initialization complete. Launching workers. 00:30:19.804 ======================================================== 00:30:19.805 Latency(us) 00:30:19.805 Device Information : IOPS MiB/s Average min max 00:30:19.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 183.58 0.09 969814.98 766.39 1011663.83 00:30:19.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 144.88 0.07 920912.64 1491.07 1011910.69 00:30:19.805 ======================================================== 00:30:19.805 Total : 328.47 0.16 948244.77 766.39 1011910.69 00:30:19.805 00:30:19.805 [2024-11-25 13:29:17.388445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13429a0 (9): Bad file descriptor 00:30:19.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:19.805 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.805 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:19.805 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3299923 00:30:19.805 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # 
kill -0 3299923 00:30:20.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3299923) - No such process 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3299923 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3299923 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3299923 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:20.370 [2024-11-25 13:29:17.909099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3300444 00:30:20.370 13:29:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:20.370 13:29:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:20.370 [2024-11-25 13:29:17.968549] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:20.933 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:20.933 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:20.933 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:21.497 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:21.497 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:21.497 13:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:22.068 13:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:22.068 13:29:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:22.068 13:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:22.326 13:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:22.326 13:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:22.326 13:29:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:22.890 13:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:22.890 13:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:22.890 13:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:23.454 13:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:23.454 13:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:23.454 13:29:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:23.711 Initializing NVMe Controllers 00:30:23.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.711 Controller IO queue size 128, less than required. 00:30:23.711 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:23.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:23.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:23.711 Initialization complete. Launching workers. 00:30:23.711 ======================================================== 00:30:23.711 Latency(us) 00:30:23.711 Device Information : IOPS MiB/s Average min max 00:30:23.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1006081.78 1000259.00 1044184.98 00:30:23.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004554.27 1000169.76 1011939.98 00:30:23.711 ======================================================== 00:30:23.711 Total : 256.00 0.12 1005318.03 1000169.76 1044184.98 00:30:23.711 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3300444 00:30:23.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3300444) - No such process 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3300444 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.968 rmmod nvme_tcp 00:30:23.968 rmmod nvme_fabrics 00:30:23.968 rmmod nvme_keyring 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3299899 ']' 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3299899 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3299899 ']' 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3299899 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3299899 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3299899' 00:30:23.968 killing process with pid 3299899 00:30:23.968 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3299899 00:30:23.969 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3299899 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.227 13:29:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.131 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.131 00:30:26.131 real 0m12.578s 00:30:26.131 user 0m24.991s 00:30:26.131 sys 0m3.757s 00:30:26.131 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:26.131 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:26.131 ************************************ 00:30:26.131 END TEST nvmf_delete_subsystem 00:30:26.131 ************************************ 00:30:26.389 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:26.389 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:26.389 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:26.389 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.390 ************************************ 00:30:26.390 START TEST nvmf_host_management 00:30:26.390 ************************************ 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:26.390 * Looking for test storage... 
00:30:26.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.390 13:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:26.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.390 --rc genhtml_branch_coverage=1 00:30:26.390 --rc genhtml_function_coverage=1 00:30:26.390 --rc genhtml_legend=1 00:30:26.390 --rc geninfo_all_blocks=1 00:30:26.390 --rc geninfo_unexecuted_blocks=1 00:30:26.390 00:30:26.390 ' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:26.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.390 --rc genhtml_branch_coverage=1 00:30:26.390 --rc genhtml_function_coverage=1 00:30:26.390 --rc genhtml_legend=1 00:30:26.390 --rc geninfo_all_blocks=1 00:30:26.390 --rc geninfo_unexecuted_blocks=1 00:30:26.390 00:30:26.390 ' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:26.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.390 --rc genhtml_branch_coverage=1 00:30:26.390 --rc genhtml_function_coverage=1 00:30:26.390 --rc genhtml_legend=1 00:30:26.390 --rc geninfo_all_blocks=1 00:30:26.390 --rc geninfo_unexecuted_blocks=1 00:30:26.390 00:30:26.390 ' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:26.390 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.390 --rc genhtml_branch_coverage=1 00:30:26.390 --rc genhtml_function_coverage=1 00:30:26.390 --rc genhtml_legend=1 00:30:26.390 --rc geninfo_all_blocks=1 00:30:26.390 --rc geninfo_unexecuted_blocks=1 00:30:26.390 00:30:26.390 ' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.390 13:29:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.390 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.391 
13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.391 13:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.918 
13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.918 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.919 13:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:28.919 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.919 13:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:28.919 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.919 13:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:28.919 Found net devices under 0000:09:00.0: cvl_0_0 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:28.919 Found net devices under 0000:09:00.1: cvl_0_1 00:30:28.919 13:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:30:28.919 00:30:28.919 --- 10.0.0.2 ping statistics --- 00:30:28.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.919 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:30:28.919 00:30:28.919 --- 10.0.0.1 ping statistics --- 00:30:28.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.919 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
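The trace above shows `nvmf_tcp_init` moving one port of the detected NIC pair into a private network namespace so initiator and target traffic crosses the wire rather than loopback, then verifying connectivity with pings in both directions. A minimal replay of that sequence, printed rather than executed (the real commands need root and the `ice` interfaces `cvl_0_0`/`cvl_0_1` this particular run detected):

```shell
# Commands from the nvmf_tcp_init trace above, collected for readability.
# Printed instead of run: executing them requires root and this run's NICs.
NS=cvl_0_0_ns_spdk
cmds=(
  "ip netns add $NS"                                          # target-side namespace
  "ip link set cvl_0_0 netns $NS"                             # move one port into it
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                       # initiator IP, root ns
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"     # target IP, inside ns
  "ip link set cvl_0_1 up"
  "ip netns exec $NS ip link set cvl_0_0 up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"  # allow NVMe/TCP
)
printf '%s\n' "${cmds[@]}"
```

The run then confirms the topology with `ping -c 1 10.0.0.2` from the root namespace and `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` from inside it, as logged above.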
00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.919 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3302789 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3302789 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3302789 ']' 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.920 [2024-11-25 13:29:26.252987] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.920 [2024-11-25 13:29:26.254087] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:30:28.920 [2024-11-25 13:29:26.254143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.920 [2024-11-25 13:29:26.327288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.920 [2024-11-25 13:29:26.389195] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.920 [2024-11-25 13:29:26.389244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.920 [2024-11-25 13:29:26.389271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.920 [2024-11-25 13:29:26.389283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.920 [2024-11-25 13:29:26.389292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:28.920 [2024-11-25 13:29:26.390872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.920 [2024-11-25 13:29:26.390933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.920 [2024-11-25 13:29:26.391000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:28.920 [2024-11-25 13:29:26.391003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.920 [2024-11-25 13:29:26.488374] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:28.920 [2024-11-25 13:29:26.488572] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:28.920 [2024-11-25 13:29:26.488869] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:28.920 [2024-11-25 13:29:26.489553] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.920 [2024-11-25 13:29:26.489805] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.920 [2024-11-25 13:29:26.539679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.920 13:29:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.920 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 Malloc0 00:30:29.178 [2024-11-25 13:29:26.619966] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3302832 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3302832 /var/tmp/bdevperf.sock 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3302832 ']' 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:29.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:29.178 { 00:30:29.178 "params": { 00:30:29.178 "name": "Nvme$subsystem", 00:30:29.178 "trtype": "$TEST_TRANSPORT", 00:30:29.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:29.178 "adrfam": "ipv4", 00:30:29.178 "trsvcid": "$NVMF_PORT", 00:30:29.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:29.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:29.178 "hdgst": ${hdgst:-false}, 00:30:29.178 "ddgst": ${ddgst:-false} 00:30:29.178 }, 00:30:29.178 "method": "bdev_nvme_attach_controller" 00:30:29.178 } 00:30:29.178 EOF 00:30:29.178 )") 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:29.178 13:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:29.178 "params": { 00:30:29.178 "name": "Nvme0", 00:30:29.178 "trtype": "tcp", 00:30:29.178 "traddr": "10.0.0.2", 00:30:29.178 "adrfam": "ipv4", 00:30:29.178 "trsvcid": "4420", 00:30:29.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.178 "hdgst": false, 00:30:29.178 "ddgst": false 00:30:29.178 }, 00:30:29.178 "method": "bdev_nvme_attach_controller" 00:30:29.178 }' 00:30:29.178 [2024-11-25 13:29:26.705415] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:30:29.178 [2024-11-25 13:29:26.705497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302832 ] 00:30:29.178 [2024-11-25 13:29:26.776014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.178 [2024-11-25 13:29:26.835830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.743 Running I/O for 10 seconds... 
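The `gen_nvmf_target_json` output printed above is the per-controller JSON that bdevperf reads from `/dev/fd/63`. As a sketch, the same entry (values copied from this run: target `10.0.0.2`, port `4420`, this test's default NQNs) can be written to a file and checked for well-formedness; the path `/tmp/nvme0.json` is just an illustrative choice:

```shell
# The bdev_nvme_attach_controller entry rendered in the trace above,
# written out and validated as JSON. Values are this run's defaults.
cat > /tmp/nvme0.json <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
python3 -c 'import json; json.load(open("/tmp/nvme0.json")); print("valid")'
```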
00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:29.743 13:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:30:29.743 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:30.001 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:30.001 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:30.001 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:30.001 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:30.001 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:30.001 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=547 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 547 -ge 100 ']' 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:30.002 [2024-11-25 13:29:27.563709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e6a0 is same with the state(6) to be set 00:30:30.002 [2024-11-25 13:29:27.563767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e6a0 is same with the state(6) to be set 00:30:30.002 [2024-11-25 13:29:27.563789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166e6a0 is same with the state(6) to be set 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
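The `waitforio` trace above (read count 67 on the first poll, 547 on the second, then `ret=0`/`break`) follows a simple retry pattern: poll `bdev_get_iostat` over the RPC socket until `num_read_ops` crosses 100 or ten attempts elapse. A standalone sketch of that loop, where `get_read_ops` is a stub standing in for `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`:

```shell
# Polling pattern behind the waitforio trace above.
# get_read_ops is a stub; the real test pipes bdev_get_iostat through jq.
get_read_ops() { echo 547; }   # the run above saw 67, then 547

ret=1
for ((i = 10; i != 0; i--)); do   # at most 10 polls
    count=$(get_read_ops)
    if [ "$count" -ge 100 ]; then # enough I/O observed: success
        ret=0
        break
    fi
    sleep 0.25                    # back off before the next poll
done
echo "ret=$ret count=$count"
```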
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:30.002 [2024-11-25 13:29:27.569277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.002 [2024-11-25 13:29:27.569348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.569368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.002 [2024-11-25 13:29:27.569382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.569397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.002 [2024-11-25 13:29:27.569410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.569424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.002 [2024-11-25 13:29:27.569437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.569450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d37a80 is same with the state(6) to be set 00:30:30.002 [2024-11-25 13:29:27.575547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.002 [2024-11-25 13:29:27.575911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.575982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.575995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 13:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:30.002 [2024-11-25 13:29:27.576093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 
13:29:27.576224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.002 [2024-11-25 13:29:27.576393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.002 [2024-11-25 13:29:27.576407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 
[2024-11-25 13:29:27.576934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.576979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.576993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.577520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.003 [2024-11-25 13:29:27.577534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.003 [2024-11-25 13:29:27.578741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:30.003 task offset: 81920 on job bdev=Nvme0n1 fails 00:30:30.003 00:30:30.003 Latency(us) 00:30:30.003 [2024-11-25T12:29:27.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.003 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:30.003 Job: Nvme0n1 ended in about 0.40 seconds with error 00:30:30.003 Verification LBA range: start 0x0 length 0x400 00:30:30.004 Nvme0n1 : 0.40 1603.58 100.22 160.36 0.00 35226.13 2427.26 34564.17 00:30:30.004 [2024-11-25T12:29:27.663Z] =================================================================================================================== 00:30:30.004 [2024-11-25T12:29:27.663Z] Total : 1603.58 100.22 160.36 0.00 35226.13 2427.26 34564.17 00:30:30.004 
[2024-11-25 13:29:27.580625] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:30.004 [2024-11-25 13:29:27.580655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37a80 (9): Bad file descriptor 00:30:30.004 [2024-11-25 13:29:27.584261] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3302832 00:30:30.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3302832) - No such process 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:30.934 { 
00:30:30.934 "params": { 00:30:30.934 "name": "Nvme$subsystem", 00:30:30.934 "trtype": "$TEST_TRANSPORT", 00:30:30.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:30.934 "adrfam": "ipv4", 00:30:30.934 "trsvcid": "$NVMF_PORT", 00:30:30.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:30.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:30.934 "hdgst": ${hdgst:-false}, 00:30:30.934 "ddgst": ${ddgst:-false} 00:30:30.934 }, 00:30:30.934 "method": "bdev_nvme_attach_controller" 00:30:30.934 } 00:30:30.934 EOF 00:30:30.934 )") 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:30.934 13:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:30.934 "params": { 00:30:30.934 "name": "Nvme0", 00:30:30.934 "trtype": "tcp", 00:30:30.934 "traddr": "10.0.0.2", 00:30:30.934 "adrfam": "ipv4", 00:30:30.934 "trsvcid": "4420", 00:30:30.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.934 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:30.934 "hdgst": false, 00:30:30.934 "ddgst": false 00:30:30.934 }, 00:30:30.934 "method": "bdev_nvme_attach_controller" 00:30:30.934 }' 00:30:31.191 [2024-11-25 13:29:28.627578] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:30:31.191 [2024-11-25 13:29:28.627692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303109 ] 00:30:31.191 [2024-11-25 13:29:28.695692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.191 [2024-11-25 13:29:28.755633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.448 Running I/O for 1 seconds... 00:30:32.825 1664.00 IOPS, 104.00 MiB/s 00:30:32.825 Latency(us) 00:30:32.825 [2024-11-25T12:29:30.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.825 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:32.825 Verification LBA range: start 0x0 length 0x400 00:30:32.825 Nvme0n1 : 1.03 1674.62 104.66 0.00 0.00 37603.51 7475.96 33399.09 00:30:32.825 [2024-11-25T12:29:30.484Z] =================================================================================================================== 00:30:32.825 [2024-11-25T12:29:30.484Z] Total : 1674.62 104.66 0.00 0.00 37603.51 7475.96 33399.09 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.825 rmmod nvme_tcp 00:30:32.825 rmmod nvme_fabrics 00:30:32.825 rmmod nvme_keyring 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3302789 ']' 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3302789 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3302789 ']' 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3302789 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:32.825 13:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3302789 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3302789' 00:30:32.825 killing process with pid 3302789 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3302789 00:30:32.825 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3302789 00:30:33.084 [2024-11-25 13:29:30.656576] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:33.084 13:29:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:33.084 13:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.615 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.615 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:35.615 00:30:35.615 real 0m8.913s 00:30:35.615 user 0m18.180s 00:30:35.615 sys 0m3.754s 00:30:35.615 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.615 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:35.615 ************************************ 00:30:35.615 END TEST nvmf_host_management 00:30:35.615 ************************************ 00:30:35.615 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:35.615 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:35.615 
13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.615 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:35.615 ************************************ 00:30:35.615 START TEST nvmf_lvol 00:30:35.616 ************************************ 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:35.616 * Looking for test storage... 00:30:35.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.616 13:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:35.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.616 --rc genhtml_branch_coverage=1 00:30:35.616 --rc 
genhtml_function_coverage=1 00:30:35.616 --rc genhtml_legend=1 00:30:35.616 --rc geninfo_all_blocks=1 00:30:35.616 --rc geninfo_unexecuted_blocks=1 00:30:35.616 00:30:35.616 ' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:35.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.616 --rc genhtml_branch_coverage=1 00:30:35.616 --rc genhtml_function_coverage=1 00:30:35.616 --rc genhtml_legend=1 00:30:35.616 --rc geninfo_all_blocks=1 00:30:35.616 --rc geninfo_unexecuted_blocks=1 00:30:35.616 00:30:35.616 ' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:35.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.616 --rc genhtml_branch_coverage=1 00:30:35.616 --rc genhtml_function_coverage=1 00:30:35.616 --rc genhtml_legend=1 00:30:35.616 --rc geninfo_all_blocks=1 00:30:35.616 --rc geninfo_unexecuted_blocks=1 00:30:35.616 00:30:35.616 ' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:35.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.616 --rc genhtml_branch_coverage=1 00:30:35.616 --rc genhtml_function_coverage=1 00:30:35.616 --rc genhtml_legend=1 00:30:35.616 --rc geninfo_all_blocks=1 00:30:35.616 --rc geninfo_unexecuted_blocks=1 00:30:35.616 00:30:35.616 ' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.616 13:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.616 13:29:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.616 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.617 13:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:37.516 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:37.516 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.516 13:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:37.516 Found net devices under 0000:09:00.0: cvl_0_0 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:37.516 Found net devices under 0000:09:00.1: cvl_0_1 00:30:37.516 13:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.516 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.517 13:29:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.517 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:37.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:30:37.775 00:30:37.775 --- 10.0.0.2 ping statistics --- 00:30:37.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.775 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:30:37.775 00:30:37.775 --- 10.0.0.1 ping statistics --- 00:30:37.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.775 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:37.775 
13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3305309 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3305309 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3305309 ']' 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.775 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.775 [2024-11-25 13:29:35.258938] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:37.775 [2024-11-25 13:29:35.260013] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:30:37.775 [2024-11-25 13:29:35.260064] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.775 [2024-11-25 13:29:35.331437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:37.775 [2024-11-25 13:29:35.391094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.775 [2024-11-25 13:29:35.391145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.775 [2024-11-25 13:29:35.391167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.775 [2024-11-25 13:29:35.391179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.775 [2024-11-25 13:29:35.391189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.775 [2024-11-25 13:29:35.392586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.775 [2024-11-25 13:29:35.392657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.775 [2024-11-25 13:29:35.392661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.034 [2024-11-25 13:29:35.481708] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:38.034 [2024-11-25 13:29:35.481931] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:38.034 [2024-11-25 13:29:35.481945] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:38.034 [2024-11-25 13:29:35.482202] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:38.034 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.034 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:38.034 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:38.034 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:38.034 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:38.034 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.034 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:38.293 [2024-11-25 13:29:35.777391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.293 13:29:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:38.551 13:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:38.551 13:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:38.808 13:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:38.808 13:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:39.066 13:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:39.323 13:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9a222b98-758d-4ab7-8c1a-4f219f26cfe8 00:30:39.323 13:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9a222b98-758d-4ab7-8c1a-4f219f26cfe8 lvol 20 00:30:39.887 13:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=229eaddb-5b1e-4d3c-988c-846861364a30 00:30:39.887 13:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:39.887 13:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 229eaddb-5b1e-4d3c-988c-846861364a30 00:30:40.145 13:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.402 [2024-11-25 13:29:38.033475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.402 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.966 
13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3305744 00:30:40.966 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:40.966 13:29:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:41.899 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 229eaddb-5b1e-4d3c-988c-846861364a30 MY_SNAPSHOT 00:30:42.156 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2627dad7-9eb4-41c6-924a-75d84fac0d8d 00:30:42.156 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 229eaddb-5b1e-4d3c-988c-846861364a30 30 00:30:42.413 13:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2627dad7-9eb4-41c6-924a-75d84fac0d8d MY_CLONE 00:30:42.671 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=474c5903-880e-4257-8880-2ddddd885d04 00:30:42.671 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 474c5903-880e-4257-8880-2ddddd885d04 00:30:43.235 13:29:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3305744 00:30:51.337 Initializing NVMe Controllers 00:30:51.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:51.337 
Controller IO queue size 128, less than required.
00:30:51.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:51.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:30:51.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:30:51.337 Initialization complete. Launching workers.
00:30:51.337 ========================================================
00:30:51.337                                                        Latency(us)
00:30:51.337 Device Information                                   : IOPS       MiB/s    Average      min        max
00:30:51.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10493.50   40.99   12200.77     323.85   77065.38
00:30:51.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10473.90   40.91   12228.12    3756.16   73636.06
00:30:51.337 ========================================================
00:30:51.337 Total                                                : 20967.40   81.90   12214.43     323.85   77065.38
00:30:51.337
00:30:51.337 13:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:30:51.337 13:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 229eaddb-5b1e-4d3c-988c-846861364a30
00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9a222b98-758d-4ab7-8c1a-4f219f26cfe8
00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- #
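As a sanity check on the perf summary above: the Total row follows from the two per-core rows by summing IOPS and MiB/s, taking an IOPS-weighted mean of the average latencies, and taking min/max across cores. A minimal sketch of that aggregation, with the figures copied from the log (the `aggregate` helper is illustrative, not an SPDK or spdk_nvme_perf API):

```python
# Reproduce the "Total" line of the spdk_nvme_perf summary from its per-core rows.
# aggregate() is a hypothetical helper for illustration; the row values are
# copied verbatim from the log above.
def aggregate(rows):
    total_iops = sum(r["iops"] for r in rows)
    total_mibs = sum(r["mibs"] for r in rows)
    # The combined average latency is weighted by each core's share of IOPS.
    avg_lat = sum(r["iops"] * r["avg"] for r in rows) / total_iops
    return {
        "iops": total_iops,
        "mibs": total_mibs,
        "avg": avg_lat,
        "min": min(r["min"] for r in rows),
        "max": max(r["max"] for r in rows),
    }

rows = [
    # core 3
    {"iops": 10493.50, "mibs": 40.99, "avg": 12200.77, "min": 323.85, "max": 77065.38},
    # core 4
    {"iops": 10473.90, "mibs": 40.91, "avg": 12228.12, "min": 3756.16, "max": 73636.06},
]

total = aggregate(rows)
print(round(total["iops"], 2), round(total["mibs"], 2), round(total["avg"], 2))
```

The weighted mean lands on the logged 12214.43 us, which confirms the Total row is a straight aggregation and not an independent measurement.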
nvmftestfini 00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.902 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.902 rmmod nvme_tcp 00:30:51.902 rmmod nvme_fabrics 00:30:52.159 rmmod nvme_keyring 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3305309 ']' 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3305309 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3305309 ']' 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3305309 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3305309 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3305309' 00:30:52.159 killing process with pid 3305309 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3305309 00:30:52.159 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3305309 00:30:52.442 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:52.442 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:52.442 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:52.442 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:52.443 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:52.443 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:52.443 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:52.443 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.443 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.443 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.443 13:29:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.443 13:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.345 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.345 00:30:54.345 real 0m19.173s 00:30:54.345 user 0m56.621s 00:30:54.345 sys 0m7.566s 00:30:54.345 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.345 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:54.345 ************************************ 00:30:54.346 END TEST nvmf_lvol 00:30:54.346 ************************************ 00:30:54.346 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:54.346 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:54.346 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.346 13:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:54.604 ************************************ 00:30:54.604 START TEST nvmf_lvs_grow 00:30:54.604 ************************************ 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:54.604 * Looking for test storage... 
00:30:54.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.604 13:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.604 13:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.604 --rc genhtml_branch_coverage=1 00:30:54.604 --rc genhtml_function_coverage=1 00:30:54.604 --rc genhtml_legend=1 00:30:54.604 --rc geninfo_all_blocks=1 00:30:54.604 --rc geninfo_unexecuted_blocks=1 00:30:54.604 00:30:54.604 ' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.604 --rc genhtml_branch_coverage=1 00:30:54.604 --rc genhtml_function_coverage=1 00:30:54.604 --rc genhtml_legend=1 00:30:54.604 --rc geninfo_all_blocks=1 00:30:54.604 --rc geninfo_unexecuted_blocks=1 00:30:54.604 00:30:54.604 ' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.604 --rc genhtml_branch_coverage=1 00:30:54.604 --rc genhtml_function_coverage=1 00:30:54.604 --rc genhtml_legend=1 00:30:54.604 --rc geninfo_all_blocks=1 00:30:54.604 --rc geninfo_unexecuted_blocks=1 00:30:54.604 00:30:54.604 ' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:54.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.604 --rc genhtml_branch_coverage=1 00:30:54.604 --rc genhtml_function_coverage=1 00:30:54.604 --rc genhtml_legend=1 00:30:54.604 --rc geninfo_all_blocks=1 00:30:54.604 --rc 
geninfo_unexecuted_blocks=1 00:30:54.604 00:30:54.604 ' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:54.604 13:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.604 13:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.604 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.605 13:29:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.605 13:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:57.135 
13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:57.135 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:57.135 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:57.136 13:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:57.136 13:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:57.136 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:57.136 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:57.136 Found net devices under 0000:09:00.0: cvl_0_0 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.136 13:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:57.136 Found net devices under 0000:09:00.1: cvl_0_1 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:57.136 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:57.137 
13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:57.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:57.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:30:57.137 00:30:57.137 --- 10.0.0.2 ping statistics --- 00:30:57.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.137 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:57.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:57.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:30:57.137 00:30:57.137 --- 10.0.0.1 ping statistics --- 00:30:57.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:57.137 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:57.137 13:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3309009 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3309009 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3309009 ']' 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:57.137 [2024-11-25 13:29:54.409833] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:57.137 [2024-11-25 13:29:54.410900] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:30:57.137 [2024-11-25 13:29:54.410966] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.137 [2024-11-25 13:29:54.482196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.137 [2024-11-25 13:29:54.538221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.137 [2024-11-25 13:29:54.538273] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.137 [2024-11-25 13:29:54.538308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.137 [2024-11-25 13:29:54.538322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.137 [2024-11-25 13:29:54.538331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.137 [2024-11-25 13:29:54.538928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.137 [2024-11-25 13:29:54.624670] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:57.137 [2024-11-25 13:29:54.624980] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.137 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:57.395 [2024-11-25 13:29:54.935544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:57.395 ************************************ 00:30:57.395 START TEST lvs_grow_clean 00:30:57.395 ************************************ 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:57.395 13:29:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:57.395 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:57.396 13:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:57.653 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:57.653 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:58.219 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c72e33fb-5787-4295-b0b7-d601008bf320 00:30:58.219 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:30:58.219 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:58.219 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:58.219 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:58.219 13:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c72e33fb-5787-4295-b0b7-d601008bf320 lvol 150 00:30:58.784 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bedeba63-3334-4e2d-a4df-d147d1b766c9 00:30:58.784 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:58.784 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:58.784 [2024-11-25 13:29:56.395449] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:58.784 [2024-11-25 13:29:56.395550] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:58.784 true 00:30:58.784 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:30:58.784 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:59.042 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:59.042 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:59.607 13:29:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bedeba63-3334-4e2d-a4df-d147d1b766c9 00:30:59.607 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:59.864 [2024-11-25 13:29:57.475729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.864 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3309444 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3309444 /var/tmp/bdevperf.sock 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3309444 ']' 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:00.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.122 13:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:00.379 [2024-11-25 13:29:57.792184] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:31:00.379 [2024-11-25 13:29:57.792251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3309444 ] 00:31:00.379 [2024-11-25 13:29:57.856705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.379 [2024-11-25 13:29:57.914010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.379 13:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.379 13:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:00.379 13:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:00.945 Nvme0n1 00:31:00.945 13:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:01.203 [ 00:31:01.203 { 00:31:01.203 "name": "Nvme0n1", 00:31:01.203 "aliases": [ 00:31:01.203 "bedeba63-3334-4e2d-a4df-d147d1b766c9" 00:31:01.203 ], 00:31:01.203 "product_name": "NVMe disk", 00:31:01.203 
"block_size": 4096, 00:31:01.203 "num_blocks": 38912, 00:31:01.203 "uuid": "bedeba63-3334-4e2d-a4df-d147d1b766c9", 00:31:01.203 "numa_id": 0, 00:31:01.203 "assigned_rate_limits": { 00:31:01.203 "rw_ios_per_sec": 0, 00:31:01.203 "rw_mbytes_per_sec": 0, 00:31:01.203 "r_mbytes_per_sec": 0, 00:31:01.203 "w_mbytes_per_sec": 0 00:31:01.203 }, 00:31:01.203 "claimed": false, 00:31:01.203 "zoned": false, 00:31:01.203 "supported_io_types": { 00:31:01.203 "read": true, 00:31:01.203 "write": true, 00:31:01.203 "unmap": true, 00:31:01.203 "flush": true, 00:31:01.203 "reset": true, 00:31:01.203 "nvme_admin": true, 00:31:01.203 "nvme_io": true, 00:31:01.203 "nvme_io_md": false, 00:31:01.203 "write_zeroes": true, 00:31:01.203 "zcopy": false, 00:31:01.203 "get_zone_info": false, 00:31:01.203 "zone_management": false, 00:31:01.203 "zone_append": false, 00:31:01.203 "compare": true, 00:31:01.203 "compare_and_write": true, 00:31:01.203 "abort": true, 00:31:01.203 "seek_hole": false, 00:31:01.203 "seek_data": false, 00:31:01.203 "copy": true, 00:31:01.203 "nvme_iov_md": false 00:31:01.203 }, 00:31:01.203 "memory_domains": [ 00:31:01.203 { 00:31:01.203 "dma_device_id": "system", 00:31:01.203 "dma_device_type": 1 00:31:01.203 } 00:31:01.203 ], 00:31:01.203 "driver_specific": { 00:31:01.203 "nvme": [ 00:31:01.203 { 00:31:01.203 "trid": { 00:31:01.203 "trtype": "TCP", 00:31:01.203 "adrfam": "IPv4", 00:31:01.203 "traddr": "10.0.0.2", 00:31:01.203 "trsvcid": "4420", 00:31:01.203 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:01.203 }, 00:31:01.203 "ctrlr_data": { 00:31:01.203 "cntlid": 1, 00:31:01.203 "vendor_id": "0x8086", 00:31:01.203 "model_number": "SPDK bdev Controller", 00:31:01.203 "serial_number": "SPDK0", 00:31:01.203 "firmware_revision": "25.01", 00:31:01.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.203 "oacs": { 00:31:01.203 "security": 0, 00:31:01.203 "format": 0, 00:31:01.203 "firmware": 0, 00:31:01.203 "ns_manage": 0 00:31:01.203 }, 00:31:01.203 "multi_ctrlr": true, 
00:31:01.203 "ana_reporting": false 00:31:01.203 }, 00:31:01.203 "vs": { 00:31:01.203 "nvme_version": "1.3" 00:31:01.203 }, 00:31:01.203 "ns_data": { 00:31:01.203 "id": 1, 00:31:01.203 "can_share": true 00:31:01.203 } 00:31:01.203 } 00:31:01.203 ], 00:31:01.203 "mp_policy": "active_passive" 00:31:01.203 } 00:31:01.203 } 00:31:01.203 ] 00:31:01.203 13:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3309528 00:31:01.203 13:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:01.203 13:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:01.203 Running I/O for 10 seconds... 00:31:02.138 Latency(us) 00:31:02.138 [2024-11-25T12:29:59.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.138 Nvme0n1 : 1.00 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:31:02.138 [2024-11-25T12:29:59.797Z] =================================================================================================================== 00:31:02.138 [2024-11-25T12:29:59.797Z] Total : 15621.00 61.02 0.00 0.00 0.00 0.00 0.00 00:31:02.138 00:31:03.071 13:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:03.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.328 Nvme0n1 : 2.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:31:03.328 [2024-11-25T12:30:00.987Z] 
=================================================================================================================== 00:31:03.328 [2024-11-25T12:30:00.987Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:31:03.328 00:31:03.328 true 00:31:03.328 13:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:03.328 13:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:03.586 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:03.586 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:03.586 13:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3309528 00:31:04.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.151 Nvme0n1 : 3.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:31:04.151 [2024-11-25T12:30:01.810Z] =================================================================================================================== 00:31:04.151 [2024-11-25T12:30:01.810Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:31:04.151 00:31:05.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.522 Nvme0n1 : 4.00 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:31:05.522 [2024-11-25T12:30:03.182Z] =================================================================================================================== 00:31:05.523 [2024-11-25T12:30:03.182Z] Total : 15494.00 60.52 0.00 0.00 0.00 0.00 0.00 00:31:05.523 00:31:06.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:31:06.454 Nvme0n1 : 5.00 15506.80 60.57 0.00 0.00 0.00 0.00 0.00 00:31:06.454 [2024-11-25T12:30:04.113Z] =================================================================================================================== 00:31:06.454 [2024-11-25T12:30:04.113Z] Total : 15506.80 60.57 0.00 0.00 0.00 0.00 0.00 00:31:06.454 00:31:07.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.386 Nvme0n1 : 6.00 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:31:07.386 [2024-11-25T12:30:05.045Z] =================================================================================================================== 00:31:07.386 [2024-11-25T12:30:05.045Z] Total : 15557.50 60.77 0.00 0.00 0.00 0.00 0.00 00:31:07.386 00:31:08.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.318 Nvme0n1 : 7.00 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:31:08.318 [2024-11-25T12:30:05.977Z] =================================================================================================================== 00:31:08.318 [2024-11-25T12:30:05.977Z] Total : 15584.71 60.88 0.00 0.00 0.00 0.00 0.00 00:31:08.318 00:31:09.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.250 Nvme0n1 : 8.00 15605.12 60.96 0.00 0.00 0.00 0.00 0.00 00:31:09.250 [2024-11-25T12:30:06.909Z] =================================================================================================================== 00:31:09.250 [2024-11-25T12:30:06.909Z] Total : 15605.12 60.96 0.00 0.00 0.00 0.00 0.00 00:31:09.250 00:31:10.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:10.183 Nvme0n1 : 9.00 15635.11 61.07 0.00 0.00 0.00 0.00 0.00 00:31:10.183 [2024-11-25T12:30:07.842Z] =================================================================================================================== 00:31:10.183 [2024-11-25T12:30:07.842Z] Total : 15635.11 61.07 0.00 0.00 0.00 0.00 0.00 00:31:10.183 
00:31:11.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.555 Nvme0n1 : 10.00 15659.10 61.17 0.00 0.00 0.00 0.00 0.00 00:31:11.555 [2024-11-25T12:30:09.214Z] =================================================================================================================== 00:31:11.555 [2024-11-25T12:30:09.214Z] Total : 15659.10 61.17 0.00 0.00 0.00 0.00 0.00 00:31:11.555 00:31:11.555 00:31:11.556 Latency(us) 00:31:11.556 [2024-11-25T12:30:09.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.556 Nvme0n1 : 10.00 15664.65 61.19 0.00 0.00 8166.73 6456.51 17476.27 00:31:11.556 [2024-11-25T12:30:09.215Z] =================================================================================================================== 00:31:11.556 [2024-11-25T12:30:09.215Z] Total : 15664.65 61.19 0.00 0.00 8166.73 6456.51 17476.27 00:31:11.556 { 00:31:11.556 "results": [ 00:31:11.556 { 00:31:11.556 "job": "Nvme0n1", 00:31:11.556 "core_mask": "0x2", 00:31:11.556 "workload": "randwrite", 00:31:11.556 "status": "finished", 00:31:11.556 "queue_depth": 128, 00:31:11.556 "io_size": 4096, 00:31:11.556 "runtime": 10.004627, 00:31:11.556 "iops": 15664.651965535546, 00:31:11.556 "mibps": 61.190046740373226, 00:31:11.556 "io_failed": 0, 00:31:11.556 "io_timeout": 0, 00:31:11.556 "avg_latency_us": 8166.732382738343, 00:31:11.556 "min_latency_us": 6456.50962962963, 00:31:11.556 "max_latency_us": 17476.266666666666 00:31:11.556 } 00:31:11.556 ], 00:31:11.556 "core_count": 1 00:31:11.556 } 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3309444 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3309444 ']' 00:31:11.556 13:30:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3309444 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3309444 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3309444' 00:31:11.556 killing process with pid 3309444 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3309444 00:31:11.556 Received shutdown signal, test time was about 10.000000 seconds 00:31:11.556 00:31:11.556 Latency(us) 00:31:11.556 [2024-11-25T12:30:09.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.556 [2024-11-25T12:30:09.215Z] =================================================================================================================== 00:31:11.556 [2024-11-25T12:30:09.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:11.556 13:30:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3309444 00:31:11.556 13:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:11.813 13:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:12.081 13:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:12.082 13:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:12.386 13:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:12.386 13:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:12.386 13:30:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:12.643 [2024-11-25 13:30:10.207511] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:12.643 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:12.900 request: 00:31:12.900 { 00:31:12.900 "uuid": "c72e33fb-5787-4295-b0b7-d601008bf320", 00:31:12.900 "method": 
"bdev_lvol_get_lvstores", 00:31:12.900 "req_id": 1 00:31:12.900 } 00:31:12.900 Got JSON-RPC error response 00:31:12.900 response: 00:31:12.900 { 00:31:12.900 "code": -19, 00:31:12.900 "message": "No such device" 00:31:12.900 } 00:31:12.900 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:12.900 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:12.900 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:12.900 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:12.900 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:13.157 aio_bdev 00:31:13.157 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bedeba63-3334-4e2d-a4df-d147d1b766c9 00:31:13.157 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=bedeba63-3334-4e2d-a4df-d147d1b766c9 00:31:13.157 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:13.157 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:13.157 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:13.157 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:13.157 13:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:13.722 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bedeba63-3334-4e2d-a4df-d147d1b766c9 -t 2000 00:31:13.722 [ 00:31:13.722 { 00:31:13.722 "name": "bedeba63-3334-4e2d-a4df-d147d1b766c9", 00:31:13.722 "aliases": [ 00:31:13.722 "lvs/lvol" 00:31:13.722 ], 00:31:13.722 "product_name": "Logical Volume", 00:31:13.722 "block_size": 4096, 00:31:13.722 "num_blocks": 38912, 00:31:13.722 "uuid": "bedeba63-3334-4e2d-a4df-d147d1b766c9", 00:31:13.722 "assigned_rate_limits": { 00:31:13.722 "rw_ios_per_sec": 0, 00:31:13.722 "rw_mbytes_per_sec": 0, 00:31:13.722 "r_mbytes_per_sec": 0, 00:31:13.722 "w_mbytes_per_sec": 0 00:31:13.722 }, 00:31:13.722 "claimed": false, 00:31:13.722 "zoned": false, 00:31:13.722 "supported_io_types": { 00:31:13.722 "read": true, 00:31:13.722 "write": true, 00:31:13.722 "unmap": true, 00:31:13.722 "flush": false, 00:31:13.722 "reset": true, 00:31:13.722 "nvme_admin": false, 00:31:13.722 "nvme_io": false, 00:31:13.722 "nvme_io_md": false, 00:31:13.722 "write_zeroes": true, 00:31:13.722 "zcopy": false, 00:31:13.722 "get_zone_info": false, 00:31:13.722 "zone_management": false, 00:31:13.722 "zone_append": false, 00:31:13.722 "compare": false, 00:31:13.722 "compare_and_write": false, 00:31:13.722 "abort": false, 00:31:13.722 "seek_hole": true, 00:31:13.722 "seek_data": true, 00:31:13.722 "copy": false, 00:31:13.722 "nvme_iov_md": false 00:31:13.722 }, 00:31:13.722 "driver_specific": { 00:31:13.722 "lvol": { 00:31:13.722 "lvol_store_uuid": "c72e33fb-5787-4295-b0b7-d601008bf320", 00:31:13.722 "base_bdev": "aio_bdev", 00:31:13.722 
"thin_provision": false, 00:31:13.722 "num_allocated_clusters": 38, 00:31:13.722 "snapshot": false, 00:31:13.722 "clone": false, 00:31:13.722 "esnap_clone": false 00:31:13.722 } 00:31:13.722 } 00:31:13.722 } 00:31:13.722 ] 00:31:13.722 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:13.722 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:13.722 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:13.980 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:13.980 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c72e33fb-5787-4295-b0b7-d601008bf320 00:31:13.980 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:14.545 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:14.545 13:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bedeba63-3334-4e2d-a4df-d147d1b766c9 00:31:14.545 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c72e33fb-5787-4295-b0b7-d601008bf320 
00:31:15.111 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:15.111 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:15.368 00:31:15.368 real 0m17.800s 00:31:15.368 user 0m17.300s 00:31:15.368 sys 0m1.816s 00:31:15.368 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.368 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:15.368 ************************************ 00:31:15.368 END TEST lvs_grow_clean 00:31:15.368 ************************************ 00:31:15.368 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:15.368 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:15.368 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.368 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:15.368 ************************************ 00:31:15.368 START TEST lvs_grow_dirty 00:31:15.369 ************************************ 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:15.369 13:30:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:15.369 13:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:15.626 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:15.626 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:15.883 13:30:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=392f32ba-321b-4c70-a290-4a408b361796 00:31:15.883 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:15.883 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:16.140 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:16.140 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:16.140 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 392f32ba-321b-4c70-a290-4a408b361796 lvol 150 00:31:16.398 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8c2a6fe7-601d-4f89-b47d-7441b0bad25c 00:31:16.398 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:16.398 13:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:16.655 [2024-11-25 13:30:14.211429] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:16.655 [2024-11-25 
13:30:14.211530] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:16.655 true 00:31:16.655 13:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:16.655 13:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:16.912 13:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:16.912 13:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:17.169 13:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8c2a6fe7-601d-4f89-b47d-7441b0bad25c 00:31:17.427 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.685 [2024-11-25 13:30:15.259682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.685 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3312107 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3312107 /var/tmp/bdevperf.sock 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3312107 ']' 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:17.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.942 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:17.942 [2024-11-25 13:30:15.587861] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:31:17.942 [2024-11-25 13:30:15.587947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3312107 ] 00:31:18.200 [2024-11-25 13:30:15.659783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.200 [2024-11-25 13:30:15.720001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.200 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.200 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:18.200 13:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:18.764 Nvme0n1 00:31:18.764 13:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:19.022 [ 00:31:19.022 { 00:31:19.022 "name": "Nvme0n1", 00:31:19.022 "aliases": [ 00:31:19.022 "8c2a6fe7-601d-4f89-b47d-7441b0bad25c" 00:31:19.022 ], 00:31:19.022 "product_name": "NVMe disk", 00:31:19.022 "block_size": 4096, 00:31:19.022 "num_blocks": 38912, 00:31:19.022 "uuid": "8c2a6fe7-601d-4f89-b47d-7441b0bad25c", 00:31:19.022 "numa_id": 0, 00:31:19.022 "assigned_rate_limits": { 00:31:19.022 "rw_ios_per_sec": 0, 00:31:19.022 "rw_mbytes_per_sec": 0, 00:31:19.022 "r_mbytes_per_sec": 0, 00:31:19.022 "w_mbytes_per_sec": 0 00:31:19.022 }, 00:31:19.022 "claimed": false, 00:31:19.022 "zoned": false, 
00:31:19.022 "supported_io_types": { 00:31:19.022 "read": true, 00:31:19.022 "write": true, 00:31:19.022 "unmap": true, 00:31:19.022 "flush": true, 00:31:19.022 "reset": true, 00:31:19.022 "nvme_admin": true, 00:31:19.022 "nvme_io": true, 00:31:19.022 "nvme_io_md": false, 00:31:19.022 "write_zeroes": true, 00:31:19.022 "zcopy": false, 00:31:19.022 "get_zone_info": false, 00:31:19.022 "zone_management": false, 00:31:19.022 "zone_append": false, 00:31:19.022 "compare": true, 00:31:19.022 "compare_and_write": true, 00:31:19.022 "abort": true, 00:31:19.022 "seek_hole": false, 00:31:19.022 "seek_data": false, 00:31:19.022 "copy": true, 00:31:19.022 "nvme_iov_md": false 00:31:19.022 }, 00:31:19.022 "memory_domains": [ 00:31:19.022 { 00:31:19.022 "dma_device_id": "system", 00:31:19.022 "dma_device_type": 1 00:31:19.022 } 00:31:19.022 ], 00:31:19.022 "driver_specific": { 00:31:19.022 "nvme": [ 00:31:19.022 { 00:31:19.022 "trid": { 00:31:19.022 "trtype": "TCP", 00:31:19.022 "adrfam": "IPv4", 00:31:19.022 "traddr": "10.0.0.2", 00:31:19.022 "trsvcid": "4420", 00:31:19.022 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:19.022 }, 00:31:19.022 "ctrlr_data": { 00:31:19.022 "cntlid": 1, 00:31:19.022 "vendor_id": "0x8086", 00:31:19.022 "model_number": "SPDK bdev Controller", 00:31:19.022 "serial_number": "SPDK0", 00:31:19.022 "firmware_revision": "25.01", 00:31:19.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.022 "oacs": { 00:31:19.022 "security": 0, 00:31:19.022 "format": 0, 00:31:19.022 "firmware": 0, 00:31:19.022 "ns_manage": 0 00:31:19.022 }, 00:31:19.022 "multi_ctrlr": true, 00:31:19.022 "ana_reporting": false 00:31:19.022 }, 00:31:19.022 "vs": { 00:31:19.022 "nvme_version": "1.3" 00:31:19.022 }, 00:31:19.022 "ns_data": { 00:31:19.022 "id": 1, 00:31:19.022 "can_share": true 00:31:19.022 } 00:31:19.022 } 00:31:19.022 ], 00:31:19.022 "mp_policy": "active_passive" 00:31:19.022 } 00:31:19.022 } 00:31:19.022 ] 00:31:19.022 13:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3312241 00:31:19.022 13:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:19.022 13:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:19.280 Running I/O for 10 seconds... 00:31:20.211 Latency(us) 00:31:20.211 [2024-11-25T12:30:17.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:20.211 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:31:20.211 [2024-11-25T12:30:17.870Z] =================================================================================================================== 00:31:20.211 [2024-11-25T12:30:17.870Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:31:20.211 00:31:21.144 13:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:21.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:21.144 Nvme0n1 : 2.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:21.144 [2024-11-25T12:30:18.803Z] =================================================================================================================== 00:31:21.144 [2024-11-25T12:30:18.803Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:21.144 00:31:21.401 true 00:31:21.401 13:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:21.401 13:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:21.659 13:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:21.659 13:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:21.659 13:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3312241 00:31:22.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:22.224 Nvme0n1 : 3.00 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:31:22.224 [2024-11-25T12:30:19.883Z] =================================================================================================================== 00:31:22.224 [2024-11-25T12:30:19.883Z] Total : 15070.67 58.87 0.00 0.00 0.00 0.00 0.00 00:31:22.224 00:31:23.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:23.156 Nvme0n1 : 4.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:31:23.156 [2024-11-25T12:30:20.815Z] =================================================================================================================== 00:31:23.156 [2024-11-25T12:30:20.815Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:31:23.156 00:31:24.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:24.087 Nvme0n1 : 5.00 15214.60 59.43 0.00 0.00 0.00 0.00 0.00 00:31:24.087 [2024-11-25T12:30:21.746Z] =================================================================================================================== 00:31:24.087 [2024-11-25T12:30:21.746Z] Total : 15214.60 59.43 0.00 0.00 0.00 0.00 0.00 00:31:24.087 00:31:25.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:25.458 Nvme0n1 : 6.00 15303.50 59.78 0.00 0.00 0.00 0.00 0.00 00:31:25.458 [2024-11-25T12:30:23.117Z] =================================================================================================================== 00:31:25.458 [2024-11-25T12:30:23.117Z] Total : 15303.50 59.78 0.00 0.00 0.00 0.00 0.00 00:31:25.458 00:31:26.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:26.390 Nvme0n1 : 7.00 15385.14 60.10 0.00 0.00 0.00 0.00 0.00 00:31:26.390 [2024-11-25T12:30:24.049Z] =================================================================================================================== 00:31:26.390 [2024-11-25T12:30:24.049Z] Total : 15385.14 60.10 0.00 0.00 0.00 0.00 0.00 00:31:26.390 00:31:27.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:27.322 Nvme0n1 : 8.00 15414.62 60.21 0.00 0.00 0.00 0.00 0.00 00:31:27.322 [2024-11-25T12:30:24.981Z] =================================================================================================================== 00:31:27.322 [2024-11-25T12:30:24.981Z] Total : 15414.62 60.21 0.00 0.00 0.00 0.00 0.00 00:31:27.322 00:31:28.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:28.254 Nvme0n1 : 9.00 15465.78 60.41 0.00 0.00 0.00 0.00 0.00 00:31:28.254 [2024-11-25T12:30:25.913Z] =================================================================================================================== 00:31:28.254 [2024-11-25T12:30:25.913Z] Total : 15465.78 60.41 0.00 0.00 0.00 0.00 0.00 00:31:28.254 00:31:29.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.186 Nvme0n1 : 10.00 15500.40 60.55 0.00 0.00 0.00 0.00 0.00 00:31:29.186 [2024-11-25T12:30:26.845Z] =================================================================================================================== 00:31:29.186 [2024-11-25T12:30:26.845Z] Total : 15500.40 60.55 0.00 0.00 0.00 0.00 0.00 00:31:29.186 00:31:29.186 
00:31:29.186 Latency(us) 00:31:29.186 [2024-11-25T12:30:26.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.186 Nvme0n1 : 10.01 15506.09 60.57 0.00 0.00 8250.31 5485.61 18155.90 00:31:29.186 [2024-11-25T12:30:26.845Z] =================================================================================================================== 00:31:29.186 [2024-11-25T12:30:26.845Z] Total : 15506.09 60.57 0.00 0.00 8250.31 5485.61 18155.90 00:31:29.186 { 00:31:29.186 "results": [ 00:31:29.186 { 00:31:29.186 "job": "Nvme0n1", 00:31:29.186 "core_mask": "0x2", 00:31:29.186 "workload": "randwrite", 00:31:29.186 "status": "finished", 00:31:29.186 "queue_depth": 128, 00:31:29.186 "io_size": 4096, 00:31:29.186 "runtime": 10.008645, 00:31:29.186 "iops": 15506.094980889022, 00:31:29.186 "mibps": 60.57068351909774, 00:31:29.186 "io_failed": 0, 00:31:29.186 "io_timeout": 0, 00:31:29.186 "avg_latency_us": 8250.31459285749, 00:31:29.186 "min_latency_us": 5485.6059259259255, 00:31:29.186 "max_latency_us": 18155.89925925926 00:31:29.186 } 00:31:29.186 ], 00:31:29.186 "core_count": 1 00:31:29.186 } 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3312107 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3312107 ']' 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3312107 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.186 13:30:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3312107 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3312107' 00:31:29.186 killing process with pid 3312107 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3312107 00:31:29.186 Received shutdown signal, test time was about 10.000000 seconds 00:31:29.186 00:31:29.186 Latency(us) 00:31:29.186 [2024-11-25T12:30:26.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.186 [2024-11-25T12:30:26.845Z] =================================================================================================================== 00:31:29.186 [2024-11-25T12:30:26.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:29.186 13:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3312107 00:31:29.444 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:29.702 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:29.960 13:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:29.960 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:30.216 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:30.216 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:30.217 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3309009 00:31:30.217 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3309009 00:31:30.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3309009 Killed "${NVMF_APP[@]}" "$@" 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3313560 00:31:30.474 13:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3313560 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3313560 ']' 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.474 13:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:30.474 [2024-11-25 13:30:27.956112] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:30.474 [2024-11-25 13:30:27.957252] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:31:30.474 [2024-11-25 13:30:27.957333] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.474 [2024-11-25 13:30:28.029855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.474 [2024-11-25 13:30:28.087049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.474 [2024-11-25 13:30:28.087114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.474 [2024-11-25 13:30:28.087127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.474 [2024-11-25 13:30:28.087151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.474 [2024-11-25 13:30:28.087162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.474 [2024-11-25 13:30:28.087742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.731 [2024-11-25 13:30:28.175008] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.731 [2024-11-25 13:30:28.175335] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:30.731 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.731 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:30.731 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:30.731 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:30.731 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:30.731 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.731 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:30.988 [2024-11-25 13:30:28.522817] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:30.988 [2024-11-25 13:30:28.522954] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:30.988 [2024-11-25 13:30:28.523016] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8c2a6fe7-601d-4f89-b47d-7441b0bad25c 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=8c2a6fe7-601d-4f89-b47d-7441b0bad25c 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:30.988 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:31.245 13:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c2a6fe7-601d-4f89-b47d-7441b0bad25c -t 2000 00:31:31.502 [ 00:31:31.502 { 00:31:31.502 "name": "8c2a6fe7-601d-4f89-b47d-7441b0bad25c", 00:31:31.502 "aliases": [ 00:31:31.502 "lvs/lvol" 00:31:31.502 ], 00:31:31.502 "product_name": "Logical Volume", 00:31:31.502 "block_size": 4096, 00:31:31.502 "num_blocks": 38912, 00:31:31.502 "uuid": "8c2a6fe7-601d-4f89-b47d-7441b0bad25c", 00:31:31.502 "assigned_rate_limits": { 00:31:31.502 "rw_ios_per_sec": 0, 00:31:31.502 "rw_mbytes_per_sec": 0, 00:31:31.502 "r_mbytes_per_sec": 0, 00:31:31.502 "w_mbytes_per_sec": 0 00:31:31.502 }, 00:31:31.502 "claimed": false, 00:31:31.502 "zoned": false, 00:31:31.502 "supported_io_types": { 00:31:31.502 "read": true, 00:31:31.502 "write": true, 00:31:31.502 "unmap": true, 00:31:31.502 "flush": false, 00:31:31.502 "reset": true, 00:31:31.502 "nvme_admin": false, 00:31:31.502 "nvme_io": false, 00:31:31.502 "nvme_io_md": false, 00:31:31.502 "write_zeroes": true, 
00:31:31.502 "zcopy": false, 00:31:31.502 "get_zone_info": false, 00:31:31.502 "zone_management": false, 00:31:31.502 "zone_append": false, 00:31:31.502 "compare": false, 00:31:31.502 "compare_and_write": false, 00:31:31.502 "abort": false, 00:31:31.502 "seek_hole": true, 00:31:31.502 "seek_data": true, 00:31:31.502 "copy": false, 00:31:31.502 "nvme_iov_md": false 00:31:31.502 }, 00:31:31.502 "driver_specific": { 00:31:31.502 "lvol": { 00:31:31.502 "lvol_store_uuid": "392f32ba-321b-4c70-a290-4a408b361796", 00:31:31.502 "base_bdev": "aio_bdev", 00:31:31.502 "thin_provision": false, 00:31:31.502 "num_allocated_clusters": 38, 00:31:31.502 "snapshot": false, 00:31:31.502 "clone": false, 00:31:31.502 "esnap_clone": false 00:31:31.502 } 00:31:31.502 } 00:31:31.502 } 00:31:31.502 ] 00:31:31.502 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:31.502 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:31.502 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:31.758 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:31.758 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:31.758 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:32.015 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:32.015 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:32.273 [2024-11-25 13:30:29.896255] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:32.273 13:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:32.837 request: 00:31:32.837 { 00:31:32.837 "uuid": "392f32ba-321b-4c70-a290-4a408b361796", 00:31:32.837 "method": "bdev_lvol_get_lvstores", 00:31:32.837 "req_id": 1 00:31:32.837 } 00:31:32.837 Got JSON-RPC error response 00:31:32.837 response: 00:31:32.837 { 00:31:32.837 "code": -19, 00:31:32.837 "message": "No such device" 00:31:32.837 } 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:32.837 aio_bdev 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8c2a6fe7-601d-4f89-b47d-7441b0bad25c 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8c2a6fe7-601d-4f89-b47d-7441b0bad25c 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:32.837 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:33.432 13:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8c2a6fe7-601d-4f89-b47d-7441b0bad25c -t 2000 00:31:33.432 [ 00:31:33.432 { 00:31:33.432 "name": "8c2a6fe7-601d-4f89-b47d-7441b0bad25c", 00:31:33.432 "aliases": [ 00:31:33.432 "lvs/lvol" 00:31:33.432 ], 00:31:33.432 "product_name": "Logical Volume", 00:31:33.432 "block_size": 4096, 00:31:33.432 "num_blocks": 38912, 00:31:33.432 "uuid": "8c2a6fe7-601d-4f89-b47d-7441b0bad25c", 00:31:33.432 "assigned_rate_limits": { 00:31:33.432 "rw_ios_per_sec": 0, 00:31:33.432 "rw_mbytes_per_sec": 0, 00:31:33.432 
"r_mbytes_per_sec": 0, 00:31:33.432 "w_mbytes_per_sec": 0 00:31:33.432 }, 00:31:33.432 "claimed": false, 00:31:33.432 "zoned": false, 00:31:33.432 "supported_io_types": { 00:31:33.432 "read": true, 00:31:33.432 "write": true, 00:31:33.432 "unmap": true, 00:31:33.432 "flush": false, 00:31:33.432 "reset": true, 00:31:33.432 "nvme_admin": false, 00:31:33.432 "nvme_io": false, 00:31:33.432 "nvme_io_md": false, 00:31:33.432 "write_zeroes": true, 00:31:33.432 "zcopy": false, 00:31:33.432 "get_zone_info": false, 00:31:33.432 "zone_management": false, 00:31:33.432 "zone_append": false, 00:31:33.432 "compare": false, 00:31:33.432 "compare_and_write": false, 00:31:33.432 "abort": false, 00:31:33.432 "seek_hole": true, 00:31:33.432 "seek_data": true, 00:31:33.432 "copy": false, 00:31:33.432 "nvme_iov_md": false 00:31:33.432 }, 00:31:33.432 "driver_specific": { 00:31:33.432 "lvol": { 00:31:33.432 "lvol_store_uuid": "392f32ba-321b-4c70-a290-4a408b361796", 00:31:33.432 "base_bdev": "aio_bdev", 00:31:33.432 "thin_provision": false, 00:31:33.432 "num_allocated_clusters": 38, 00:31:33.432 "snapshot": false, 00:31:33.432 "clone": false, 00:31:33.432 "esnap_clone": false 00:31:33.432 } 00:31:33.432 } 00:31:33.432 } 00:31:33.432 ] 00:31:33.704 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:33.704 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:33.704 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:33.961 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:33.961 13:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:33.961 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:34.219 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:34.219 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8c2a6fe7-601d-4f89-b47d-7441b0bad25c 00:31:34.476 13:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 392f32ba-321b-4c70-a290-4a408b361796 00:31:34.733 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:34.991 00:31:34.991 real 0m19.648s 00:31:34.991 user 0m36.778s 00:31:34.991 sys 0m4.608s 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:34.991 ************************************ 00:31:34.991 END TEST lvs_grow_dirty 00:31:34.991 ************************************ 
00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:34.991 nvmf_trace.0 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:34.991 13:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:34.991 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:34.991 rmmod nvme_tcp 00:31:34.991 rmmod nvme_fabrics 00:31:34.992 rmmod nvme_keyring 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3313560 ']' 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3313560 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3313560 ']' 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3313560 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3313560 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:34.992 
13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3313560' 00:31:34.992 killing process with pid 3313560 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3313560 00:31:34.992 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3313560 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.250 13:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.781 
13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.781 00:31:37.781 real 0m42.906s 00:31:37.781 user 0m55.834s 00:31:37.781 sys 0m8.426s 00:31:37.781 13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.781 13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:37.781 ************************************ 00:31:37.781 END TEST nvmf_lvs_grow 00:31:37.781 ************************************ 00:31:37.781 13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:37.781 13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:37.781 13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.781 13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:37.781 ************************************ 00:31:37.781 START TEST nvmf_bdev_io_wait 00:31:37.781 ************************************ 00:31:37.781 13:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:37.781 * Looking for test storage... 
00:31:37.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:37.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.781 --rc genhtml_branch_coverage=1 00:31:37.781 --rc genhtml_function_coverage=1 00:31:37.781 --rc genhtml_legend=1 00:31:37.781 --rc geninfo_all_blocks=1 00:31:37.781 --rc geninfo_unexecuted_blocks=1 00:31:37.781 00:31:37.781 ' 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:37.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.781 --rc genhtml_branch_coverage=1 00:31:37.781 --rc genhtml_function_coverage=1 00:31:37.781 --rc genhtml_legend=1 00:31:37.781 --rc geninfo_all_blocks=1 00:31:37.781 --rc geninfo_unexecuted_blocks=1 00:31:37.781 00:31:37.781 ' 00:31:37.781 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:37.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.781 --rc genhtml_branch_coverage=1 00:31:37.781 --rc genhtml_function_coverage=1 00:31:37.781 --rc genhtml_legend=1 00:31:37.781 --rc geninfo_all_blocks=1 00:31:37.781 --rc geninfo_unexecuted_blocks=1 00:31:37.781 00:31:37.781 ' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:37.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.782 --rc genhtml_branch_coverage=1 00:31:37.782 --rc genhtml_function_coverage=1 
00:31:37.782 --rc genhtml_legend=1 00:31:37.782 --rc geninfo_all_blocks=1 00:31:37.782 --rc geninfo_unexecuted_blocks=1 00:31:37.782 00:31:37.782 ' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:37.782 13:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.782 13:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.782 13:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.782 13:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.782 13:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:39.682 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.682 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:39.940 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:39.940 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:39.940 Found net devices under 0000:09:00.0: cvl_0_0 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.940 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:39.941 Found net devices under 0000:09:00.1: cvl_0_1 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:39.941 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:39.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:31:39.941 00:31:39.941 --- 10.0.0.2 ping statistics --- 00:31:39.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.941 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:39.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:31:39.941 00:31:39.941 --- 10.0.0.1 ping statistics --- 00:31:39.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.941 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:39.941 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3316204 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3316204 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3316204 ']' 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.941 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:39.941 [2024-11-25 13:30:37.567502] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:39.941 [2024-11-25 13:30:37.568603] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:31:39.941 [2024-11-25 13:30:37.568668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.199 [2024-11-25 13:30:37.642656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.199 [2024-11-25 13:30:37.705309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.199 [2024-11-25 13:30:37.705366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.199 [2024-11-25 13:30:37.705381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.199 [2024-11-25 13:30:37.705393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.199 [2024-11-25 13:30:37.705403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:40.199 [2024-11-25 13:30:37.706993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.199 [2024-11-25 13:30:37.707059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.199 [2024-11-25 13:30:37.707089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.199 [2024-11-25 13:30:37.707092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.199 [2024-11-25 13:30:37.707605] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.199 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.199 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 [2024-11-25 13:30:37.902054] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:40.457 [2024-11-25 13:30:37.902294] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:40.457 [2024-11-25 13:30:37.903203] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:40.457 [2024-11-25 13:30:37.904061] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 [2024-11-25 13:30:37.911796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 Malloc0 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.457 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.457 [2024-11-25 13:30:37.963967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3316237 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3316238 00:31:40.457 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:40.458 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3316241 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.458 { 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme$subsystem", 00:31:40.458 "trtype": "$TEST_TRANSPORT", 00:31:40.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "$NVMF_PORT", 00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.458 "hdgst": ${hdgst:-false}, 00:31:40.458 "ddgst": ${ddgst:-false} 00:31:40.458 }, 00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 } 00:31:40.458 EOF 00:31:40.458 )") 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3316243 00:31:40.458 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.458 { 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme$subsystem", 00:31:40.458 "trtype": "$TEST_TRANSPORT", 00:31:40.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "$NVMF_PORT", 00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.458 "hdgst": ${hdgst:-false}, 00:31:40.458 "ddgst": ${ddgst:-false} 00:31:40.458 }, 00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 } 00:31:40.458 EOF 00:31:40.458 )") 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.458 13:30:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.458 { 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme$subsystem", 00:31:40.458 "trtype": "$TEST_TRANSPORT", 00:31:40.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "$NVMF_PORT", 00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.458 "hdgst": ${hdgst:-false}, 00:31:40.458 "ddgst": ${ddgst:-false} 00:31:40.458 }, 00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 } 00:31:40.458 EOF 00:31:40.458 )") 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.458 { 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme$subsystem", 00:31:40.458 "trtype": "$TEST_TRANSPORT", 00:31:40.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "$NVMF_PORT", 00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.458 "hdgst": ${hdgst:-false}, 00:31:40.458 "ddgst": ${ddgst:-false} 00:31:40.458 }, 
00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 } 00:31:40.458 EOF 00:31:40.458 )") 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3316237 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme1", 00:31:40.458 "trtype": "tcp", 00:31:40.458 "traddr": "10.0.0.2", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "4420", 00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.458 "hdgst": false, 00:31:40.458 "ddgst": false 00:31:40.458 }, 00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 }' 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme1", 00:31:40.458 "trtype": "tcp", 00:31:40.458 "traddr": "10.0.0.2", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "4420", 
00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.458 "hdgst": false, 00:31:40.458 "ddgst": false 00:31:40.458 }, 00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 }' 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme1", 00:31:40.458 "trtype": "tcp", 00:31:40.458 "traddr": "10.0.0.2", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "4420", 00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.458 "hdgst": false, 00:31:40.458 "ddgst": false 00:31:40.458 }, 00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 }' 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.458 13:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.458 "params": { 00:31:40.458 "name": "Nvme1", 00:31:40.458 "trtype": "tcp", 00:31:40.458 "traddr": "10.0.0.2", 00:31:40.458 "adrfam": "ipv4", 00:31:40.458 "trsvcid": "4420", 00:31:40.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.458 "hdgst": false, 00:31:40.458 "ddgst": false 00:31:40.458 }, 00:31:40.458 "method": "bdev_nvme_attach_controller" 00:31:40.458 }' 00:31:40.458 [2024-11-25 13:30:38.015808] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:31:40.458 [2024-11-25 13:30:38.015806] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:31:40.458 [2024-11-25 13:30:38.015806] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:31:40.458 [2024-11-25 13:30:38.015838] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:31:40.458 [2024-11-25 13:30:38.015895] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-25 13:30:38.015896] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-25 13:30:38.015905] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:40.458 --proc-type=auto ] 00:31:40.458 --proc-type=auto ] 00:31:40.459 [2024-11-25 13:30:38.015928] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:40.715 [2024-11-25 13:30:38.204326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.715 [2024-11-25 13:30:38.259740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:40.715 [2024-11-25 13:30:38.307823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.715 [2024-11-25 13:30:38.363377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:40.973 [2024-11-25 13:30:38.408551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.973 [2024-11-25 13:30:38.466447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:40.973 [2024-11-25 13:30:38.485916] app.c: 919:spdk_app_start: *NOTICE*: Total cores 
available: 1 00:31:40.973 [2024-11-25 13:30:38.539908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:40.973 Running I/O for 1 seconds... 00:31:41.230 Running I/O for 1 seconds... 00:31:41.230 Running I/O for 1 seconds... 00:31:41.230 Running I/O for 1 seconds... 00:31:42.160 172784.00 IOPS, 674.94 MiB/s 00:31:42.160 Latency(us) 00:31:42.160 [2024-11-25T12:30:39.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.160 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:42.160 Nvme1n1 : 1.00 172440.57 673.60 0.00 0.00 738.15 321.61 1966.08 00:31:42.160 [2024-11-25T12:30:39.819Z] =================================================================================================================== 00:31:42.160 [2024-11-25T12:30:39.819Z] Total : 172440.57 673.60 0.00 0.00 738.15 321.61 1966.08 00:31:42.160 6461.00 IOPS, 25.24 MiB/s 00:31:42.160 Latency(us) 00:31:42.160 [2024-11-25T12:30:39.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.160 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:42.160 Nvme1n1 : 1.02 6451.43 25.20 0.00 0.00 19638.34 4004.98 32428.18 00:31:42.160 [2024-11-25T12:30:39.819Z] =================================================================================================================== 00:31:42.160 [2024-11-25T12:30:39.819Z] Total : 6451.43 25.20 0.00 0.00 19638.34 4004.98 32428.18 00:31:42.160 9410.00 IOPS, 36.76 MiB/s 00:31:42.160 Latency(us) 00:31:42.160 [2024-11-25T12:30:39.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.160 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:42.160 Nvme1n1 : 1.01 9461.98 36.96 0.00 0.00 13464.30 2281.62 18544.26 00:31:42.160 [2024-11-25T12:30:39.819Z] =================================================================================================================== 00:31:42.160 
[2024-11-25T12:30:39.819Z] Total : 9461.98 36.96 0.00 0.00 13464.30 2281.62 18544.26 00:31:42.160 6329.00 IOPS, 24.72 MiB/s 00:31:42.160 Latency(us) 00:31:42.160 [2024-11-25T12:30:39.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.160 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:42.160 Nvme1n1 : 1.01 6462.29 25.24 0.00 0.00 19753.99 3956.43 42525.58 00:31:42.160 [2024-11-25T12:30:39.819Z] =================================================================================================================== 00:31:42.160 [2024-11-25T12:30:39.819Z] Total : 6462.29 25.24 0.00 0.00 19753.99 3956.43 42525.58 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3316238 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3316241 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3316243 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.417 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:42.418 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:42.418 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 
-- # nvmfcleanup 00:31:42.418 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:42.418 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.418 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:42.418 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.418 13:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:42.418 rmmod nvme_tcp 00:31:42.418 rmmod nvme_fabrics 00:31:42.418 rmmod nvme_keyring 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3316204 ']' 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3316204 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3316204 ']' 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3316204 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 3316204 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3316204' 00:31:42.418 killing process with pid 3316204 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3316204 00:31:42.418 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3316204 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:42.677 13:30:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.677 13:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.211 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.212 00:31:45.212 real 0m7.339s 00:31:45.212 user 0m14.468s 00:31:45.212 sys 0m3.993s 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.212 ************************************ 00:31:45.212 END TEST nvmf_bdev_io_wait 00:31:45.212 ************************************ 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:45.212 ************************************ 00:31:45.212 START TEST nvmf_queue_depth 00:31:45.212 ************************************ 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 
00:31:45.212 * Looking for test storage... 00:31:45.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.212 --rc genhtml_branch_coverage=1 00:31:45.212 --rc genhtml_function_coverage=1 00:31:45.212 --rc genhtml_legend=1 00:31:45.212 --rc geninfo_all_blocks=1 00:31:45.212 --rc geninfo_unexecuted_blocks=1 00:31:45.212 00:31:45.212 ' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.212 --rc genhtml_branch_coverage=1 00:31:45.212 --rc genhtml_function_coverage=1 00:31:45.212 --rc genhtml_legend=1 00:31:45.212 --rc geninfo_all_blocks=1 00:31:45.212 --rc geninfo_unexecuted_blocks=1 00:31:45.212 00:31:45.212 ' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.212 --rc genhtml_branch_coverage=1 00:31:45.212 --rc genhtml_function_coverage=1 00:31:45.212 --rc genhtml_legend=1 00:31:45.212 --rc geninfo_all_blocks=1 00:31:45.212 --rc geninfo_unexecuted_blocks=1 00:31:45.212 00:31:45.212 ' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:45.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.212 --rc genhtml_branch_coverage=1 00:31:45.212 --rc genhtml_function_coverage=1 00:31:45.212 
--rc genhtml_legend=1 00:31:45.212 --rc geninfo_all_blocks=1 00:31:45.212 --rc geninfo_unexecuted_blocks=1 00:31:45.212 00:31:45.212 ' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.212 13:30:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.212 13:30:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.212 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.213 13:30:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.213 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.213 13:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:47.109 
13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:47.109 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:47.109 13:30:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:47.109 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:47.109 Found net devices under 0000:09:00.0: cvl_0_0 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.109 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:47.110 Found net devices under 0000:09:00.1: cvl_0_1 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:47.110 13:30:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:47.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:47.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:31:47.110 00:31:47.110 --- 10.0.0.2 ping statistics --- 00:31:47.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.110 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:47.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:31:47.110 00:31:47.110 --- 10.0.0.1 ping statistics --- 00:31:47.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.110 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:47.110 13:30:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3318456 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3318456 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3318456 ']' 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.110 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.110 [2024-11-25 13:30:44.708318] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:47.110 [2024-11-25 13:30:44.709396] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:31:47.110 [2024-11-25 13:30:44.709467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.369 [2024-11-25 13:30:44.784689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.369 [2024-11-25 13:30:44.843062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:47.369 [2024-11-25 13:30:44.843129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:47.369 [2024-11-25 13:30:44.843154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:47.369 [2024-11-25 13:30:44.843165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:47.369 [2024-11-25 13:30:44.843175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:47.369 [2024-11-25 13:30:44.843770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.369 [2024-11-25 13:30:44.928878] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:47.369 [2024-11-25 13:30:44.929177] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.369 [2024-11-25 13:30:44.988431] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.369 13:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.627 Malloc0 00:31:47.627 13:30:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.627 [2024-11-25 13:30:45.052493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.627 
13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3318483 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:47.627 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:47.628 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3318483 /var/tmp/bdevperf.sock 00:31:47.628 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3318483 ']' 00:31:47.628 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:47.628 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:47.628 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:47.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:47.628 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:47.628 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.628 [2024-11-25 13:30:45.098867] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:31:47.628 [2024-11-25 13:30:45.098929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318483 ] 00:31:47.628 [2024-11-25 13:30:45.164461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.628 [2024-11-25 13:30:45.221884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.885 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:47.885 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:47.885 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:47.885 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.885 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:47.885 NVMe0n1 00:31:47.885 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.885 13:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:48.143 Running I/O for 10 seconds... 
00:31:50.442 8192.00 IOPS, 32.00 MiB/s [2024-11-25T12:30:49.030Z] 8394.50 IOPS, 32.79 MiB/s [2024-11-25T12:30:49.960Z] 8528.00 IOPS, 33.31 MiB/s [2024-11-25T12:30:50.890Z] 8503.25 IOPS, 33.22 MiB/s [2024-11-25T12:30:51.820Z] 8582.20 IOPS, 33.52 MiB/s [2024-11-25T12:30:52.749Z] 8536.33 IOPS, 33.35 MiB/s [2024-11-25T12:30:53.742Z] 8608.86 IOPS, 33.63 MiB/s [2024-11-25T12:30:55.111Z] 8587.25 IOPS, 33.54 MiB/s [2024-11-25T12:30:56.044Z] 8639.56 IOPS, 33.75 MiB/s [2024-11-25T12:30:56.044Z] 8635.60 IOPS, 33.73 MiB/s 00:31:58.385 Latency(us) 00:31:58.385 [2024-11-25T12:30:56.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.385 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:58.385 Verification LBA range: start 0x0 length 0x4000 00:31:58.385 NVMe0n1 : 10.06 8672.26 33.88 0.00 0.00 117551.27 11650.84 69128.34 00:31:58.385 [2024-11-25T12:30:56.044Z] =================================================================================================================== 00:31:58.385 [2024-11-25T12:30:56.044Z] Total : 8672.26 33.88 0.00 0.00 117551.27 11650.84 69128.34 00:31:58.385 { 00:31:58.385 "results": [ 00:31:58.385 { 00:31:58.385 "job": "NVMe0n1", 00:31:58.385 "core_mask": "0x1", 00:31:58.385 "workload": "verify", 00:31:58.385 "status": "finished", 00:31:58.385 "verify_range": { 00:31:58.385 "start": 0, 00:31:58.385 "length": 16384 00:31:58.385 }, 00:31:58.385 "queue_depth": 1024, 00:31:58.385 "io_size": 4096, 00:31:58.385 "runtime": 10.064733, 00:31:58.385 "iops": 8672.261847383334, 00:31:58.385 "mibps": 33.87602284134115, 00:31:58.385 "io_failed": 0, 00:31:58.385 "io_timeout": 0, 00:31:58.385 "avg_latency_us": 117551.2660346557, 00:31:58.385 "min_latency_us": 11650.844444444445, 00:31:58.385 "max_latency_us": 69128.34370370371 00:31:58.385 } 00:31:58.385 ], 00:31:58.385 "core_count": 1 00:31:58.385 } 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3318483 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3318483 ']' 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3318483 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3318483 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3318483' 00:31:58.385 killing process with pid 3318483 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3318483 00:31:58.385 Received shutdown signal, test time was about 10.000000 seconds 00:31:58.385 00:31:58.385 Latency(us) 00:31:58.385 [2024-11-25T12:30:56.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.385 [2024-11-25T12:30:56.044Z] =================================================================================================================== 00:31:58.385 [2024-11-25T12:30:56.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:58.385 13:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3318483 00:31:58.385 13:30:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:58.385 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:58.385 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:58.385 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:58.385 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:58.385 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:58.385 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:58.385 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:58.385 rmmod nvme_tcp 00:31:58.643 rmmod nvme_fabrics 00:31:58.643 rmmod nvme_keyring 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3318456 ']' 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3318456 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3318456 ']' 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3318456 00:31:58.643 13:30:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3318456 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3318456' 00:31:58.643 killing process with pid 3318456 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3318456 00:31:58.643 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3318456 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.900 13:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:00.820 00:32:00.820 real 0m16.055s 00:32:00.820 user 0m22.413s 00:32:00.820 sys 0m3.262s 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:00.820 ************************************ 00:32:00.820 END TEST nvmf_queue_depth 00:32:00.820 ************************************ 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:00.820 ************************************ 00:32:00.820 START 
TEST nvmf_target_multipath 00:32:00.820 ************************************ 00:32:00.820 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:01.079 * Looking for test storage... 00:32:01.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.079 13:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:01.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.079 --rc genhtml_branch_coverage=1 00:32:01.079 --rc genhtml_function_coverage=1 00:32:01.079 --rc genhtml_legend=1 00:32:01.079 --rc geninfo_all_blocks=1 00:32:01.079 --rc geninfo_unexecuted_blocks=1 00:32:01.079 00:32:01.079 ' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:01.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.079 --rc genhtml_branch_coverage=1 00:32:01.079 --rc genhtml_function_coverage=1 00:32:01.079 --rc genhtml_legend=1 00:32:01.079 --rc geninfo_all_blocks=1 00:32:01.079 --rc geninfo_unexecuted_blocks=1 00:32:01.079 00:32:01.079 ' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:01.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.079 --rc genhtml_branch_coverage=1 00:32:01.079 --rc genhtml_function_coverage=1 00:32:01.079 --rc genhtml_legend=1 00:32:01.079 --rc geninfo_all_blocks=1 00:32:01.079 --rc geninfo_unexecuted_blocks=1 00:32:01.079 00:32:01.079 ' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:01.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.079 --rc genhtml_branch_coverage=1 00:32:01.079 --rc genhtml_function_coverage=1 00:32:01.079 --rc genhtml_legend=1 00:32:01.079 --rc geninfo_all_blocks=1 00:32:01.079 --rc geninfo_unexecuted_blocks=1 00:32:01.079 00:32:01.079 ' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.079 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.080 13:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.080 13:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.080 13:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.615 13:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.615 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:03.616 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:03.616 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:03.616 Found net devices under 0000:09:00.0: cvl_0_0 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.616 13:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:03.616 Found net devices under 0000:09:00.1: cvl_0_1 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.616 13:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.616 13:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:03.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:32:03.616 00:32:03.616 --- 10.0.0.2 ping statistics --- 00:32:03.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.616 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:03.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:32:03.616 00:32:03.616 --- 10.0.0.1 ping statistics --- 00:32:03.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.616 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:03.616 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:03.617 only one NIC for nvmf test 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:03.617 13:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:03.617 rmmod nvme_tcp 00:32:03.617 rmmod nvme_fabrics 00:32:03.617 rmmod nvme_keyring 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:03.617 13:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:03.617 13:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:03.617 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:03.617 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:03.617 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.617 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:03.617 13:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.551 
13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:05.551 00:32:05.551 real 0m4.605s 00:32:05.551 user 0m0.914s 00:32:05.551 sys 0m1.705s 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:05.551 ************************************ 00:32:05.551 END TEST nvmf_target_multipath 00:32:05.551 ************************************ 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:05.551 ************************************ 00:32:05.551 START TEST nvmf_zcopy 00:32:05.551 ************************************ 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:05.551 * Looking for test storage... 
00:32:05.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:05.551 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:05.809 13:31:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:05.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.809 --rc genhtml_branch_coverage=1 00:32:05.809 --rc genhtml_function_coverage=1 00:32:05.809 --rc genhtml_legend=1 00:32:05.809 --rc geninfo_all_blocks=1 00:32:05.809 --rc geninfo_unexecuted_blocks=1 00:32:05.809 00:32:05.809 ' 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:05.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.809 --rc genhtml_branch_coverage=1 00:32:05.809 --rc genhtml_function_coverage=1 00:32:05.809 --rc genhtml_legend=1 00:32:05.809 --rc geninfo_all_blocks=1 00:32:05.809 --rc geninfo_unexecuted_blocks=1 00:32:05.809 00:32:05.809 ' 00:32:05.809 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:05.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.809 --rc genhtml_branch_coverage=1 00:32:05.809 --rc genhtml_function_coverage=1 00:32:05.809 --rc genhtml_legend=1 00:32:05.809 --rc geninfo_all_blocks=1 00:32:05.809 --rc geninfo_unexecuted_blocks=1 00:32:05.809 00:32:05.809 ' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:05.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:05.810 --rc genhtml_branch_coverage=1 00:32:05.810 --rc genhtml_function_coverage=1 00:32:05.810 --rc genhtml_legend=1 00:32:05.810 --rc geninfo_all_blocks=1 00:32:05.810 --rc geninfo_unexecuted_blocks=1 00:32:05.810 00:32:05.810 ' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.810 13:31:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:05.810 13:31:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:05.810 13:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:07.710 
13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.710 13:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:07.710 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:07.711 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:07.711 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:07.711 Found net devices under 0000:09:00.0: cvl_0_0 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:07.711 Found net devices under 0000:09:00.1: cvl_0_1 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.711 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.969 13:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:07.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:32:07.969 00:32:07.969 --- 10.0.0.2 ping statistics --- 00:32:07.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.969 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:32:07.969 00:32:07.969 --- 10.0.0.1 ping statistics --- 00:32:07.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.969 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:07.969 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3323663 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3323663 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3323663 ']' 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:07.970 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.970 [2024-11-25 13:31:05.514635] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:07.970 [2024-11-25 13:31:05.515729] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:32:07.970 [2024-11-25 13:31:05.515782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:07.970 [2024-11-25 13:31:05.587077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.228 [2024-11-25 13:31:05.644283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:08.228 [2024-11-25 13:31:05.644336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:08.228 [2024-11-25 13:31:05.644351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:08.228 [2024-11-25 13:31:05.644363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:08.228 [2024-11-25 13:31:05.644372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:08.228 [2024-11-25 13:31:05.644995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.228 [2024-11-25 13:31:05.742655] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:08.228 [2024-11-25 13:31:05.742937] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.228 [2024-11-25 13:31:05.797574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.228 
13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.228 [2024-11-25 13:31:05.813773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.228 malloc0 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:08.228 { 00:32:08.228 "params": { 00:32:08.228 "name": "Nvme$subsystem", 00:32:08.228 "trtype": "$TEST_TRANSPORT", 00:32:08.228 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:08.228 "adrfam": "ipv4", 00:32:08.228 "trsvcid": "$NVMF_PORT", 00:32:08.228 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.228 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.228 "hdgst": ${hdgst:-false}, 00:32:08.228 "ddgst": ${ddgst:-false} 00:32:08.228 }, 00:32:08.228 "method": "bdev_nvme_attach_controller" 00:32:08.228 } 00:32:08.228 EOF 00:32:08.228 )") 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:08.228 13:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:08.228 13:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:08.228 "params": { 00:32:08.228 "name": "Nvme1", 00:32:08.228 "trtype": "tcp", 00:32:08.228 "traddr": "10.0.0.2", 00:32:08.228 "adrfam": "ipv4", 00:32:08.228 "trsvcid": "4420", 00:32:08.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:08.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:08.228 "hdgst": false, 00:32:08.228 "ddgst": false 00:32:08.228 }, 00:32:08.228 "method": "bdev_nvme_attach_controller" 00:32:08.228 }' 00:32:08.485 [2024-11-25 13:31:05.893207] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:32:08.485 [2024-11-25 13:31:05.893271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3323685 ] 00:32:08.485 [2024-11-25 13:31:05.959264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.485 [2024-11-25 13:31:06.019628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.742 Running I/O for 10 seconds... 
00:32:10.603 5607.00 IOPS, 43.80 MiB/s [2024-11-25T12:31:09.629Z] 5673.50 IOPS, 44.32 MiB/s [2024-11-25T12:31:10.560Z] 5674.00 IOPS, 44.33 MiB/s [2024-11-25T12:31:11.491Z] 5679.25 IOPS, 44.37 MiB/s [2024-11-25T12:31:12.422Z] 5687.40 IOPS, 44.43 MiB/s [2024-11-25T12:31:13.353Z] 5686.33 IOPS, 44.42 MiB/s [2024-11-25T12:31:14.286Z] 5690.43 IOPS, 44.46 MiB/s [2024-11-25T12:31:15.657Z] 5693.00 IOPS, 44.48 MiB/s [2024-11-25T12:31:16.589Z] 5696.44 IOPS, 44.50 MiB/s [2024-11-25T12:31:16.589Z] 5700.80 IOPS, 44.54 MiB/s 00:32:18.930 Latency(us) 00:32:18.930 [2024-11-25T12:31:16.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.930 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:18.930 Verification LBA range: start 0x0 length 0x1000 00:32:18.930 Nvme1n1 : 10.02 5701.01 44.54 0.00 0.00 22384.94 4247.70 29903.83 00:32:18.930 [2024-11-25T12:31:16.589Z] =================================================================================================================== 00:32:18.930 [2024-11-25T12:31:16.589Z] Total : 5701.01 44.54 0.00 0.00 22384.94 4247.70 29903.83 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3324942 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:18.930 13:31:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:18.930 { 00:32:18.930 "params": { 00:32:18.930 "name": "Nvme$subsystem", 00:32:18.930 "trtype": "$TEST_TRANSPORT", 00:32:18.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:18.930 "adrfam": "ipv4", 00:32:18.930 "trsvcid": "$NVMF_PORT", 00:32:18.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:18.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:18.930 "hdgst": ${hdgst:-false}, 00:32:18.930 "ddgst": ${ddgst:-false} 00:32:18.930 }, 00:32:18.930 "method": "bdev_nvme_attach_controller" 00:32:18.930 } 00:32:18.930 EOF 00:32:18.930 )") 00:32:18.930 [2024-11-25 13:31:16.473550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.930 [2024-11-25 13:31:16.473605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:18.930 13:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:18.930 "params": { 00:32:18.930 "name": "Nvme1", 00:32:18.930 "trtype": "tcp", 00:32:18.930 "traddr": "10.0.0.2", 00:32:18.930 "adrfam": "ipv4", 00:32:18.930 "trsvcid": "4420", 00:32:18.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:18.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:18.930 "hdgst": false, 00:32:18.930 "ddgst": false 00:32:18.930 }, 00:32:18.930 "method": "bdev_nvme_attach_controller" 00:32:18.930 }' 00:32:18.930 [2024-11-25 13:31:16.481484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.930 [2024-11-25 13:31:16.481509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.930 [2024-11-25 13:31:16.489483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.930 [2024-11-25 13:31:16.489506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.930 [2024-11-25 13:31:16.497479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.930 [2024-11-25 13:31:16.497500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.930 [2024-11-25 13:31:16.505484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.930 [2024-11-25 13:31:16.505507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.930 [2024-11-25 13:31:16.513477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.930 [2024-11-25 13:31:16.513499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.930 [2024-11-25 13:31:16.513559] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:32:18.931 [2024-11-25 13:31:16.513631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3324942 ] 00:32:18.931 [2024-11-25 13:31:16.521479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.521509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.931 [2024-11-25 13:31:16.529481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.529503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.931 [2024-11-25 13:31:16.537479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.537508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.931 [2024-11-25 13:31:16.545478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.545507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.931 [2024-11-25 13:31:16.553480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.553510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.931 [2024-11-25 13:31:16.561481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.561503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.931 [2024-11-25 13:31:16.569481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.569503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:32:18.931 [2024-11-25 13:31:16.577478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.577499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:18.931 [2024-11-25 13:31:16.582391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.931 [2024-11-25 13:31:16.585481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:18.931 [2024-11-25 13:31:16.585509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.593519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.593550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.601509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.601538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.609476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.609497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.617477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.617498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.625477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.625497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.633476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.633497] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.641476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.641496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.644854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.190 [2024-11-25 13:31:16.649475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.649496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.657481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.657502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.665508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.665537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.673506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.673536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.681506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.681536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.689505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.689537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.697504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:32:19.190 [2024-11-25 13:31:16.697535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.705507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.705539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.713480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.713501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.721500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.721546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.729508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.729540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.737511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.737546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.745480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.745501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.753490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.753511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.761494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 
13:31:16.761519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.769509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.769533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.777482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.777506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.785482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.785505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.793478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.793500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.801476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.801497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.809477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.809498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.817483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.817503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.825484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.825506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.833502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.833525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.190 [2024-11-25 13:31:16.841486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.190 [2024-11-25 13:31:16.841510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.849478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.849501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.857484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.857510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.865480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.865504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 Running I/O for 5 seconds... 
00:32:19.450 [2024-11-25 13:31:16.883014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.883041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.899991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.900028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.914896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.914923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.924885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.924911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.937197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.937237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.948182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.948207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.961065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.961094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.971226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.971251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.982678] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.982703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:16.992116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:16.992143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.007256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.007297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.017228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.017270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.029125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.029149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.039535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.039561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.054608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.054635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.064046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.064084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.076152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.076177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.089411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.089438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.450 [2024-11-25 13:31:17.099128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.450 [2024-11-25 13:31:17.099152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.110859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.110887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.125651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.125700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.135408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.135435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.147208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.147233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.163750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.163777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.173494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 
[2024-11-25 13:31:17.173518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.185659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.185684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.196540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.196567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.208948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.208975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.218466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.218493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.230227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.230251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.240083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.240108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.255650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.255676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.265406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.265433] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.277475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.277502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.288184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.288210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.301325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.301353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.310890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.310914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.323042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.323081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.339204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.339229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.348802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.348834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.763 [2024-11-25 13:31:17.360800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.360825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:19.763 [2024-11-25 13:31:17.375225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.763 [2024-11-25 13:31:17.375251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[last two-line error pair (subsystem.c:2123 / nvmf_rpc.c:1517) repeated for each subsequent add-namespace attempt, 13:31:17.393926 through 13:31:19.456820; repeats elided]
00:32:20.302 11537.00 IOPS, 90.13 MiB/s [2024-11-25T12:31:17.961Z]
00:32:21.335 11616.50 IOPS, 90.75 MiB/s [2024-11-25T12:31:18.994Z]
00:32:21.849 [2024-11-25 13:31:19.456845] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.849 [2024-11-25 13:31:19.469989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.849 [2024-11-25 13:31:19.470030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.849 [2024-11-25 13:31:19.479166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.849 [2024-11-25 13:31:19.479190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.849 [2024-11-25 13:31:19.491100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.849 [2024-11-25 13:31:19.491125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.849 [2024-11-25 13:31:19.506663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.849 [2024-11-25 13:31:19.506688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.105 [2024-11-25 13:31:19.517316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.105 [2024-11-25 13:31:19.517355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.105 [2024-11-25 13:31:19.529024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.105 [2024-11-25 13:31:19.529048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.105 [2024-11-25 13:31:19.539897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.105 [2024-11-25 13:31:19.539921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.105 [2024-11-25 13:31:19.554907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.105 [2024-11-25 13:31:19.554931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:22.105 [2024-11-25 13:31:19.564899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.105 [2024-11-25 13:31:19.564924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.105 [2024-11-25 13:31:19.576985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.105 [2024-11-25 13:31:19.577024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.105 [2024-11-25 13:31:19.587824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.105 [2024-11-25 13:31:19.587847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.602036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.602062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.611969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.611992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.624166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.624190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.637263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.637314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.647110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.647150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.659346] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.659388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.675991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.676016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.691089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.691115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.700406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.700446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.714437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.714464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.724440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.724465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.738509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.738536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.747880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.747905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.106 [2024-11-25 13:31:19.762058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:22.106 [2024-11-25 13:31:19.762085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.771694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.771719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.783808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.783833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.798948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.798988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.808385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.808415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.824664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.824689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.839126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.839155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.848649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.848675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.863366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 
[2024-11-25 13:31:19.863392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.873146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.873177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 11573.67 IOPS, 90.42 MiB/s [2024-11-25T12:31:20.023Z] [2024-11-25 13:31:19.885159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.885183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.895648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.895672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.908108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.908134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.923549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.923574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.933429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.933456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.945701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.945725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.956698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 
[2024-11-25 13:31:19.956722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.967760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.967783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.980837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.980863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:19.991022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:19.991046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:20.008322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:20.008357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.364 [2024-11-25 13:31:20.021709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.364 [2024-11-25 13:31:20.021757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.037121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.037150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.053758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.053800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.063144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.063169] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.079574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.079623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.096169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.096195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.112057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.112084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.126894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.126929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.145375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.145402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.155568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.155594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.171824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.171848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.187511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.187553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:22.621 [2024-11-25 13:31:20.205275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.205324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.214915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.214940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.229908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.229932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.239763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.239787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.254580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.254606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.621 [2024-11-25 13:31:20.274497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.621 [2024-11-25 13:31:20.274536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.292146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.292170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.308177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.308202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.320744] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.320771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.330458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.330487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.347004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.347029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.365687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.365712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.375878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.375904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.878 [2024-11-25 13:31:20.390857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.878 [2024-11-25 13:31:20.390881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.409638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.409677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.418992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.419016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.434542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.434567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.444357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.444394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.458209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.458234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.468355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.468393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.483262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.483309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.501810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.501834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.511936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.511959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.526514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 [2024-11-25 13:31:20.526540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:22.879 [2024-11-25 13:31:20.536339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:22.879 
[2024-11-25 13:31:20.536381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.550358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.550398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.559786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.559824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.574398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.574424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.583716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.583741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.598410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.598435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.617914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.617938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.638010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.638035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.656069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.656093] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.671820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.671846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.687514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.687540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.703669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.703708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.719753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.719777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.737799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.737823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.748754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.748778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.759703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.759727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.136 [2024-11-25 13:31:20.772847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.772873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:23.136 [2024-11-25 13:31:20.782369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.136 [2024-11-25 13:31:20.782393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.798330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.798356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.808167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.808191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.823181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.823205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.841609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.841648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.851751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.851776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.866555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.866594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.876263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.876311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 11513.25 IOPS, 89.95 MiB/s 
[2024-11-25T12:31:21.052Z] [2024-11-25 13:31:20.890958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.890998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.909339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.909380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.919531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.919556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.935025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.935049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.953615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.953640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.964495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.964520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.979102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.979131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:20.998172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:20.998199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:21.016527] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:21.016553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:21.026921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:21.026961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.393 [2024-11-25 13:31:21.043221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.393 [2024-11-25 13:31:21.043248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.650 [2024-11-25 13:31:21.061901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.650 [2024-11-25 13:31:21.061927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.072120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.072145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.084444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.084478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.100461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.100488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.115333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.115359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.133561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.133587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.143284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.143331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.159593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.159633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.175401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.175428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.193546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.193572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.203566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.203600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.219855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.219880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.235733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.235759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.251729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 
[2024-11-25 13:31:21.251755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.269579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.269605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.280081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.280105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.294069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.294095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.651 [2024-11-25 13:31:21.304070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.651 [2024-11-25 13:31:21.304094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.319185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.319210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.337401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.337429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.346982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.347009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.363151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.363176] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.381551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.381578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.391219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.391245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.407310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.407359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.425622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.425648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.436629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.436654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.449734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.449760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.459685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.459709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.474455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.474490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:23.909 [2024-11-25 13:31:21.494025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.494051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.503827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.503851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.518439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.518465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.538393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.538417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.909 [2024-11-25 13:31:21.557999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.909 [2024-11-25 13:31:21.558023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.576480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.576505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.586504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.586530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.602800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.602824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.622031] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.622057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.632237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.632262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.647350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.647375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.665319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.665345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.675025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.675051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.690923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.690947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.710426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.710452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.729957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.729982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.739583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.739608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.754641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.754666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.771294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.771351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.789087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.789113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.799158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.799182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.166 [2024-11-25 13:31:21.815261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.166 [2024-11-25 13:31:21.815286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.833300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.833334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.843626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.843650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.859045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 
[2024-11-25 13:31:21.859071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.877391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.877433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 11507.20 IOPS, 89.90 MiB/s [2024-11-25T12:31:22.081Z] [2024-11-25 13:31:21.886739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.886764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 00:32:24.422 Latency(us) 00:32:24.422 [2024-11-25T12:31:22.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.422 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:24.422 Nvme1n1 : 5.01 11517.72 89.98 0.00 0.00 11100.72 2961.26 22524.97 00:32:24.422 [2024-11-25T12:31:22.081Z] =================================================================================================================== 00:32:24.422 [2024-11-25T12:31:22.081Z] Total : 11517.72 89.98 0.00 0.00 11100.72 2961.26 22524.97 00:32:24.422 [2024-11-25 13:31:21.893483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.893507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.901493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.901517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.909479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.909500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.921574] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.921623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.933564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.933602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.945558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.945604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.953522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.953560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.961536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.961574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.973562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.973607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.981533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.981574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.989539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.989577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.422 [2024-11-25 13:31:21.997535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:24.422 [2024-11-25 13:31:21.997576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.005539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.005582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.017560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.017607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.025529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.025569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.033534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.033572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.041522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.041557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.049479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.049500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.057475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.057494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.065476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 
[2024-11-25 13:31:22.065496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.423 [2024-11-25 13:31:22.073479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.423 [2024-11-25 13:31:22.073500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.679 [2024-11-25 13:31:22.081521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.679 [2024-11-25 13:31:22.081555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.679 [2024-11-25 13:31:22.089532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.679 [2024-11-25 13:31:22.089572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.679 [2024-11-25 13:31:22.097517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.679 [2024-11-25 13:31:22.097554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.679 [2024-11-25 13:31:22.105477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.679 [2024-11-25 13:31:22.105497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.679 [2024-11-25 13:31:22.113473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.679 [2024-11-25 13:31:22.113492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.679 [2024-11-25 13:31:22.121481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.679 [2024-11-25 13:31:22.121503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3324942) - No such process 00:32:24.679 13:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3324942 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:24.679 delay0 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.679 13:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:24.679 [2024-11-25 13:31:22.240683] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:31.226 Initializing NVMe Controllers 00:32:31.226 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:31.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:31.226 Initialization complete. Launching workers. 00:32:31.226 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 208 00:32:31.226 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 495, failed to submit 33 00:32:31.226 success 391, unsuccessful 104, failed 0 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.226 rmmod nvme_tcp 00:32:31.226 rmmod nvme_fabrics 00:32:31.226 rmmod nvme_keyring 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.226 13:31:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3323663 ']' 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3323663 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3323663 ']' 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3323663 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3323663 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3323663' 00:32:31.226 killing process with pid 3323663 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3323663 00:32:31.226 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3323663 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:31.485 13:31:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.485 13:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.382 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.382 00:32:33.382 real 0m27.846s 00:32:33.382 user 0m39.788s 00:32:33.382 sys 0m9.416s 00:32:33.382 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:33.382 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:33.382 ************************************ 00:32:33.382 END TEST nvmf_zcopy 00:32:33.382 ************************************ 00:32:33.382 13:31:30 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:33.382 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:33.382 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:33.382 13:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:33.382 ************************************ 00:32:33.382 START TEST nvmf_nmic 00:32:33.382 ************************************ 00:32:33.382 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:33.641 * Looking for test storage... 00:32:33.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.641 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:33.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.641 --rc genhtml_branch_coverage=1 00:32:33.641 --rc 
genhtml_function_coverage=1 00:32:33.641 --rc genhtml_legend=1 00:32:33.641 --rc geninfo_all_blocks=1 00:32:33.641 --rc geninfo_unexecuted_blocks=1 00:32:33.642 00:32:33.642 ' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:33.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.642 --rc genhtml_branch_coverage=1 00:32:33.642 --rc genhtml_function_coverage=1 00:32:33.642 --rc genhtml_legend=1 00:32:33.642 --rc geninfo_all_blocks=1 00:32:33.642 --rc geninfo_unexecuted_blocks=1 00:32:33.642 00:32:33.642 ' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:33.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.642 --rc genhtml_branch_coverage=1 00:32:33.642 --rc genhtml_function_coverage=1 00:32:33.642 --rc genhtml_legend=1 00:32:33.642 --rc geninfo_all_blocks=1 00:32:33.642 --rc geninfo_unexecuted_blocks=1 00:32:33.642 00:32:33.642 ' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:33.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.642 --rc genhtml_branch_coverage=1 00:32:33.642 --rc genhtml_function_coverage=1 00:32:33.642 --rc genhtml_legend=1 00:32:33.642 --rc geninfo_all_blocks=1 00:32:33.642 --rc geninfo_unexecuted_blocks=1 00:32:33.642 00:32:33.642 ' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.642 13:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.642 13:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.642 13:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.642 13:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.173 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:36.174 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:36.174 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:36.174 Found net devices under 0000:09:00.0: cvl_0_0 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:36.174 Found net devices under 0000:09:00.1: cvl_0_1 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.174 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.175 13:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:36.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:32:36.175 00:32:36.175 --- 10.0.0.2 ping statistics --- 00:32:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.175 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:36.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:32:36.175 00:32:36.175 --- 10.0.0.1 ping statistics --- 00:32:36.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.175 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:36.175 13:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3328246 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3328246 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3328246 ']' 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.175 [2024-11-25 13:31:33.556713] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:32:36.175 [2024-11-25 13:31:33.557903] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:32:36.175 [2024-11-25 13:31:33.557967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.175 [2024-11-25 13:31:33.633834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:36.175 [2024-11-25 13:31:33.694446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.175 [2024-11-25 13:31:33.694495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.175 [2024-11-25 13:31:33.694519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.175 [2024-11-25 13:31:33.694530] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.175 [2024-11-25 13:31:33.694540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.175 [2024-11-25 13:31:33.696156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.175 [2024-11-25 13:31:33.696337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.175 [2024-11-25 13:31:33.696271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.175 [2024-11-25 13:31:33.696332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.175 [2024-11-25 13:31:33.783335] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:36.175 [2024-11-25 13:31:33.783562] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:32:36.175 [2024-11-25 13:31:33.783844] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:36.175 [2024-11-25 13:31:33.784510] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:36.175 [2024-11-25 13:31:33.784762] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:36.175 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.433 [2024-11-25 13:31:33.840977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.433 Malloc0 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:36.433 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.434 [2024-11-25 
13:31:33.905165] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:36.434 test case1: single bdev can't be used in multiple subsystems 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.434 13:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.434 [2024-11-25 13:31:33.928920] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:36.434 [2024-11-25 13:31:33.928951] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:36.434 [2024-11-25 13:31:33.928967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:36.434 request: 00:32:36.434 { 00:32:36.434 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:36.434 "namespace": { 00:32:36.434 "bdev_name": "Malloc0", 00:32:36.434 "no_auto_visible": false 00:32:36.434 }, 00:32:36.434 "method": "nvmf_subsystem_add_ns", 00:32:36.434 "req_id": 1 00:32:36.434 } 00:32:36.434 Got JSON-RPC error response 00:32:36.434 response: 00:32:36.434 { 00:32:36.434 "code": -32602, 00:32:36.434 "message": "Invalid parameters" 00:32:36.434 } 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:36.434 Adding namespace failed - expected result. 
00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:36.434 test case2: host connect to nvmf target in multiple paths 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:36.434 [2024-11-25 13:31:33.937008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.434 13:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:36.692 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:36.951 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:36.951 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:36.951 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:36.951 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:36.951 13:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:38.851 13:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:38.851 13:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:38.851 13:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:38.851 13:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:38.851 13:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:38.851 13:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:38.851 13:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:38.851 [global] 00:32:38.851 thread=1 00:32:38.851 invalidate=1 00:32:38.851 rw=write 00:32:38.851 time_based=1 00:32:38.851 runtime=1 00:32:38.851 ioengine=libaio 00:32:38.851 direct=1 00:32:38.851 bs=4096 00:32:38.851 iodepth=1 00:32:38.851 norandommap=0 00:32:38.851 numjobs=1 00:32:38.851 00:32:38.851 verify_dump=1 00:32:38.851 verify_backlog=512 00:32:38.851 verify_state_save=0 00:32:38.851 do_verify=1 00:32:38.851 verify=crc32c-intel 00:32:38.851 [job0] 00:32:38.851 filename=/dev/nvme0n1 00:32:38.851 Could not set queue depth (nvme0n1) 00:32:39.108 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:39.108 fio-3.35 00:32:39.108 Starting 1 thread 00:32:40.479 00:32:40.479 job0: (groupid=0, jobs=1): err= 0: pid=3328749: Mon Nov 25 
13:31:37 2024 00:32:40.479 read: IOPS=26, BW=106KiB/s (108kB/s)(108KiB/1021msec) 00:32:40.479 slat (nsec): min=6860, max=36703, avg=20448.00, stdev=9538.10 00:32:40.479 clat (usec): min=238, max=42080, avg=32452.40, stdev=17536.01 00:32:40.479 lat (usec): min=247, max=42095, avg=32472.84, stdev=17541.58 00:32:40.479 clat percentiles (usec): 00:32:40.479 | 1.00th=[ 239], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 318], 00:32:40.479 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:32:40.479 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:40.479 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:40.479 | 99.99th=[42206] 00:32:40.479 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:32:40.479 slat (usec): min=18, max=29192, avg=81.36, stdev=1289.06 00:32:40.479 clat (usec): min=175, max=392, avg=193.79, stdev=13.68 00:32:40.479 lat (usec): min=195, max=29419, avg=275.15, stdev=1290.60 00:32:40.479 clat percentiles (usec): 00:32:40.479 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 188], 00:32:40.479 | 30.00th=[ 188], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 194], 00:32:40.479 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 204], 95.00th=[ 208], 00:32:40.479 | 99.00th=[ 227], 99.50th=[ 297], 99.90th=[ 392], 99.95th=[ 392], 00:32:40.479 | 99.99th=[ 392] 00:32:40.479 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:40.479 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:40.479 lat (usec) : 250=94.62%, 500=1.48% 00:32:40.479 lat (msec) : 50=3.90% 00:32:40.479 cpu : usr=0.88%, sys=1.47%, ctx=541, majf=0, minf=1 00:32:40.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:40.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:40.479 issued rwts: 
total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:40.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:40.480 00:32:40.480 Run status group 0 (all jobs): 00:32:40.480 READ: bw=106KiB/s (108kB/s), 106KiB/s-106KiB/s (108kB/s-108kB/s), io=108KiB (111kB), run=1021-1021msec 00:32:40.480 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:32:40.480 00:32:40.480 Disk stats (read/write): 00:32:40.480 nvme0n1: ios=76/512, merge=0/0, ticks=1181/83, in_queue=1264, util=98.70% 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:40.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:40.480 13:31:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:40.480 13:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:40.480 rmmod nvme_tcp 00:32:40.480 rmmod nvme_fabrics 00:32:40.480 rmmod nvme_keyring 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3328246 ']' 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3328246 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3328246 ']' 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3328246 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3328246 
00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3328246' 00:32:40.480 killing process with pid 3328246 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3328246 00:32:40.480 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3328246 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.738 13:31:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.738 13:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.268 00:32:43.268 real 0m9.348s 00:32:43.268 user 0m17.606s 00:32:43.268 sys 0m3.305s 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:43.268 ************************************ 00:32:43.268 END TEST nvmf_nmic 00:32:43.268 ************************************ 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:43.268 ************************************ 00:32:43.268 START TEST nvmf_fio_target 00:32:43.268 ************************************ 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:43.268 * Looking for test storage... 
00:32:43.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:43.268 
13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:43.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.268 --rc genhtml_branch_coverage=1 00:32:43.268 --rc genhtml_function_coverage=1 00:32:43.268 --rc genhtml_legend=1 00:32:43.268 --rc geninfo_all_blocks=1 00:32:43.268 --rc geninfo_unexecuted_blocks=1 00:32:43.268 00:32:43.268 ' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:43.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.268 --rc genhtml_branch_coverage=1 00:32:43.268 --rc genhtml_function_coverage=1 00:32:43.268 --rc genhtml_legend=1 00:32:43.268 --rc geninfo_all_blocks=1 00:32:43.268 --rc geninfo_unexecuted_blocks=1 00:32:43.268 00:32:43.268 ' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:43.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.268 --rc genhtml_branch_coverage=1 00:32:43.268 --rc genhtml_function_coverage=1 00:32:43.268 --rc genhtml_legend=1 00:32:43.268 --rc geninfo_all_blocks=1 00:32:43.268 --rc geninfo_unexecuted_blocks=1 00:32:43.268 00:32:43.268 ' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:43.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.268 --rc genhtml_branch_coverage=1 00:32:43.268 --rc genhtml_function_coverage=1 00:32:43.268 --rc genhtml_legend=1 00:32:43.268 --rc geninfo_all_blocks=1 
00:32:43.268 --rc geninfo_unexecuted_blocks=1 00:32:43.268 00:32:43.268 ' 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:43.268 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:43.268 
13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.269 13:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:43.269 
13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:43.269 13:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:43.269 13:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.175 13:31:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:45.175 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:45.175 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.175 
13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.175 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:45.176 Found net 
devices under 0000:09:00.0: cvl_0_0 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:45.176 Found net devices under 0000:09:00.1: cvl_0_1 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:45.176 13:31:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:45.176 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:32:45.506 00:32:45.506 --- 10.0.0.2 ping statistics --- 00:32:45.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.506 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:32:45.506 00:32:45.506 --- 10.0.0.1 ping statistics --- 00:32:45.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.506 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.506 13:31:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3330940 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3330940 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3330940 ']' 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.506 13:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.506 [2024-11-25 13:31:42.981163] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.506 [2024-11-25 13:31:42.982220] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:32:45.506 [2024-11-25 13:31:42.982300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.506 [2024-11-25 13:31:43.053579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:45.765 [2024-11-25 13:31:43.113619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.765 [2024-11-25 13:31:43.113682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.765 [2024-11-25 13:31:43.113702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.765 [2024-11-25 13:31:43.113713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.765 [2024-11-25 13:31:43.113723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.765 [2024-11-25 13:31:43.115347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.765 [2024-11-25 13:31:43.115372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:45.765 [2024-11-25 13:31:43.115419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:45.765 [2024-11-25 13:31:43.115423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.765 [2024-11-25 13:31:43.202568] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:45.765 [2024-11-25 13:31:43.202772] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:45.765 [2024-11-25 13:31:43.203075] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:45.765 [2024-11-25 13:31:43.203668] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:45.765 [2024-11-25 13:31:43.203874] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:45.765 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.765 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:45.765 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.765 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.765 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.765 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.765 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:46.024 [2024-11-25 13:31:43.492186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:46.024 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.282 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:46.282 13:31:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:32:46.540 13:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:46.540 13:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.797 13:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:46.797 13:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.363 13:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:47.363 13:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:47.363 13:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.929 13:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:47.929 13:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.929 13:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:47.929 13:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:48.495 13:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:48.495 13:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:48.495 13:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:48.753 13:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:48.754 13:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:49.319 13:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:49.319 13:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:49.319 13:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.577 [2024-11-25 13:31:47.176359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.577 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:49.834 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:50.093 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:50.350 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:50.350 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:50.350 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:50.350 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:50.350 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:50.350 13:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:52.876 13:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:52.876 13:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:52.876 13:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:52.876 13:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:52.876 13:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:52.876 13:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:32:52.876 13:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:52.876 [global] 00:32:52.876 thread=1 00:32:52.876 invalidate=1 00:32:52.876 rw=write 00:32:52.876 time_based=1 00:32:52.876 runtime=1 00:32:52.876 ioengine=libaio 00:32:52.876 direct=1 00:32:52.876 bs=4096 00:32:52.876 iodepth=1 00:32:52.876 norandommap=0 00:32:52.876 numjobs=1 00:32:52.876 00:32:52.876 verify_dump=1 00:32:52.876 verify_backlog=512 00:32:52.876 verify_state_save=0 00:32:52.876 do_verify=1 00:32:52.876 verify=crc32c-intel 00:32:52.876 [job0] 00:32:52.876 filename=/dev/nvme0n1 00:32:52.876 [job1] 00:32:52.876 filename=/dev/nvme0n2 00:32:52.876 [job2] 00:32:52.876 filename=/dev/nvme0n3 00:32:52.876 [job3] 00:32:52.876 filename=/dev/nvme0n4 00:32:52.876 Could not set queue depth (nvme0n1) 00:32:52.876 Could not set queue depth (nvme0n2) 00:32:52.876 Could not set queue depth (nvme0n3) 00:32:52.876 Could not set queue depth (nvme0n4) 00:32:52.876 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.876 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.876 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.876 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.876 fio-3.35 00:32:52.876 Starting 4 threads 00:32:53.837 00:32:53.837 job0: (groupid=0, jobs=1): err= 0: pid=3331893: Mon Nov 25 13:31:51 2024 00:32:53.837 read: IOPS=101, BW=408KiB/s (418kB/s)(412KiB/1010msec) 00:32:53.837 slat (nsec): min=4698, max=34528, avg=14353.19, stdev=7838.77 00:32:53.837 clat (usec): min=287, max=42052, avg=8604.20, stdev=16417.62 00:32:53.837 lat (usec): min=293, 
max=42068, avg=8618.55, stdev=16422.16 00:32:53.837 clat percentiles (usec): 00:32:53.837 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 318], 20.00th=[ 334], 00:32:53.837 | 30.00th=[ 383], 40.00th=[ 408], 50.00th=[ 445], 60.00th=[ 515], 00:32:53.837 | 70.00th=[ 578], 80.00th=[16188], 90.00th=[42206], 95.00th=[42206], 00:32:53.837 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:53.837 | 99.99th=[42206] 00:32:53.837 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:32:53.837 slat (nsec): min=5639, max=59206, avg=8745.05, stdev=3736.98 00:32:53.837 clat (usec): min=151, max=714, avg=226.09, stdev=56.71 00:32:53.837 lat (usec): min=159, max=722, avg=234.84, stdev=56.86 00:32:53.837 clat percentiles (usec): 00:32:53.837 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:32:53.837 | 30.00th=[ 188], 40.00th=[ 206], 50.00th=[ 233], 60.00th=[ 241], 00:32:53.837 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 297], 00:32:53.837 | 99.00th=[ 400], 99.50th=[ 562], 99.90th=[ 717], 99.95th=[ 717], 00:32:53.837 | 99.99th=[ 717] 00:32:53.837 bw ( KiB/s): min= 4096, max= 4096, per=47.96%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.837 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.837 lat (usec) : 250=60.33%, 500=32.20%, 750=4.07% 00:32:53.837 lat (msec) : 20=0.16%, 50=3.25% 00:32:53.837 cpu : usr=0.30%, sys=0.59%, ctx=615, majf=0, minf=1 00:32:53.837 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.837 issued rwts: total=103,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.837 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.837 job1: (groupid=0, jobs=1): err= 0: pid=3331894: Mon Nov 25 13:31:51 2024 00:32:53.837 read: IOPS=173, BW=694KiB/s 
(710kB/s)(704KiB/1015msec) 00:32:53.837 slat (nsec): min=5216, max=34230, avg=9702.95, stdev=6307.83 00:32:53.837 clat (usec): min=204, max=41241, avg=5105.36, stdev=13243.56 00:32:53.837 lat (usec): min=210, max=41251, avg=5115.06, stdev=13247.97 00:32:53.837 clat percentiles (usec): 00:32:53.837 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 225], 00:32:53.837 | 30.00th=[ 233], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 258], 00:32:53.837 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:32:53.837 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:53.837 | 99.99th=[41157] 00:32:53.837 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:32:53.837 slat (usec): min=5, max=789, avg=10.09, stdev=34.74 00:32:53.837 clat (usec): min=141, max=420, avg=209.13, stdev=63.61 00:32:53.837 lat (usec): min=147, max=968, avg=219.22, stdev=72.47 00:32:53.837 clat percentiles (usec): 00:32:53.837 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:32:53.837 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 202], 00:32:53.837 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 293], 95.00th=[ 379], 00:32:53.838 | 99.00th=[ 383], 99.50th=[ 388], 99.90th=[ 420], 99.95th=[ 420], 00:32:53.838 | 99.99th=[ 420] 00:32:53.838 bw ( KiB/s): min= 4096, max= 4096, per=47.96%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.838 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.838 lat (usec) : 250=71.51%, 500=25.44% 00:32:53.838 lat (msec) : 50=3.05% 00:32:53.838 cpu : usr=0.49%, sys=0.39%, ctx=690, majf=0, minf=1 00:32:53.838 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.838 issued rwts: total=176,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.838 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:32:53.838 job2: (groupid=0, jobs=1): err= 0: pid=3331895: Mon Nov 25 13:31:51 2024 00:32:53.838 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:32:53.838 slat (nsec): min=5206, max=45444, avg=9483.57, stdev=6032.43 00:32:53.838 clat (usec): min=203, max=41931, avg=1691.06, stdev=7502.83 00:32:53.838 lat (usec): min=211, max=41949, avg=1700.55, stdev=7505.49 00:32:53.838 clat percentiles (usec): 00:32:53.838 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 235], 00:32:53.838 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:32:53.838 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 486], 00:32:53.838 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:32:53.838 | 99.99th=[41681] 00:32:53.838 write: IOPS=630, BW=2521KiB/s (2582kB/s)(2524KiB/1001msec); 0 zone resets 00:32:53.838 slat (nsec): min=8073, max=50082, avg=11477.87, stdev=5211.37 00:32:53.838 clat (usec): min=160, max=340, avg=188.20, stdev=23.53 00:32:53.838 lat (usec): min=172, max=371, avg=199.68, stdev=25.81 00:32:53.838 clat percentiles (usec): 00:32:53.838 | 1.00th=[ 165], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 174], 00:32:53.838 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:32:53.838 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 231], 00:32:53.838 | 99.00th=[ 285], 99.50th=[ 322], 99.90th=[ 343], 99.95th=[ 343], 00:32:53.838 | 99.99th=[ 343] 00:32:53.838 bw ( KiB/s): min= 4096, max= 4096, per=47.96%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.838 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.838 lat (usec) : 250=78.74%, 500=19.16%, 750=0.44% 00:32:53.838 lat (msec) : 4=0.09%, 50=1.57% 00:32:53.838 cpu : usr=0.90%, sys=1.30%, ctx=1145, majf=0, minf=1 00:32:53.838 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:53.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.838 issued rwts: total=512,631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.838 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.838 job3: (groupid=0, jobs=1): err= 0: pid=3331896: Mon Nov 25 13:31:51 2024 00:32:53.838 read: IOPS=409, BW=1638KiB/s (1678kB/s)(1640KiB/1001msec) 00:32:53.838 slat (nsec): min=4496, max=59619, avg=13589.66, stdev=12133.79 00:32:53.838 clat (usec): min=197, max=41196, avg=2089.29, stdev=8224.30 00:32:53.838 lat (usec): min=211, max=41207, avg=2102.87, stdev=8225.84 00:32:53.838 clat percentiles (usec): 00:32:53.838 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 231], 00:32:53.838 | 30.00th=[ 237], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 269], 00:32:53.838 | 70.00th=[ 347], 80.00th=[ 441], 90.00th=[ 478], 95.00th=[ 523], 00:32:53.838 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:53.838 | 99.99th=[41157] 00:32:53.838 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:32:53.838 slat (nsec): min=6233, max=40144, avg=9972.28, stdev=4110.68 00:32:53.838 clat (usec): min=150, max=394, avg=254.04, stdev=47.77 00:32:53.838 lat (usec): min=159, max=412, avg=264.01, stdev=47.10 00:32:53.838 clat percentiles (usec): 00:32:53.838 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 202], 20.00th=[ 227], 00:32:53.838 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:32:53.838 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 310], 95.00th=[ 379], 00:32:53.838 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 396], 99.95th=[ 396], 00:32:53.838 | 99.99th=[ 396] 00:32:53.838 bw ( KiB/s): min= 4096, max= 4096, per=47.96%, avg=4096.00, stdev= 0.00, samples=1 00:32:53.838 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:53.838 lat (usec) : 250=48.48%, 500=48.81%, 750=0.65% 00:32:53.838 lat (msec) : 20=0.11%, 50=1.95% 00:32:53.838 cpu : usr=0.50%, sys=1.10%, 
ctx=923, majf=0, minf=1 00:32:53.838 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:53.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:53.838 issued rwts: total=410,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:53.838 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:53.838 00:32:53.838 Run status group 0 (all jobs): 00:32:53.838 READ: bw=4733KiB/s (4847kB/s), 408KiB/s-2046KiB/s (418kB/s-2095kB/s), io=4804KiB (4919kB), run=1001-1015msec 00:32:53.838 WRITE: bw=8540KiB/s (8745kB/s), 2018KiB/s-2521KiB/s (2066kB/s-2582kB/s), io=8668KiB (8876kB), run=1001-1015msec 00:32:53.838 00:32:53.838 Disk stats (read/write): 00:32:53.838 nvme0n1: ios=148/512, merge=0/0, ticks=733/112, in_queue=845, util=86.57% 00:32:53.838 nvme0n2: ios=69/512, merge=0/0, ticks=904/101, in_queue=1005, util=98.37% 00:32:53.838 nvme0n3: ios=492/512, merge=0/0, ticks=1211/92, in_queue=1303, util=98.43% 00:32:53.838 nvme0n4: ios=282/512, merge=0/0, ticks=1006/126, in_queue=1132, util=97.89% 00:32:53.838 13:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:53.838 [global] 00:32:53.838 thread=1 00:32:53.838 invalidate=1 00:32:53.838 rw=randwrite 00:32:53.838 time_based=1 00:32:53.838 runtime=1 00:32:53.838 ioengine=libaio 00:32:53.838 direct=1 00:32:53.838 bs=4096 00:32:53.838 iodepth=1 00:32:53.838 norandommap=0 00:32:53.838 numjobs=1 00:32:53.838 00:32:53.838 verify_dump=1 00:32:53.838 verify_backlog=512 00:32:53.838 verify_state_save=0 00:32:53.838 do_verify=1 00:32:53.838 verify=crc32c-intel 00:32:53.838 [job0] 00:32:53.838 filename=/dev/nvme0n1 00:32:53.838 [job1] 00:32:53.838 filename=/dev/nvme0n2 00:32:53.838 [job2] 00:32:53.838 filename=/dev/nvme0n3 00:32:53.838 [job3] 
00:32:53.838 filename=/dev/nvme0n4 00:32:53.838 Could not set queue depth (nvme0n1) 00:32:53.838 Could not set queue depth (nvme0n2) 00:32:53.838 Could not set queue depth (nvme0n3) 00:32:53.838 Could not set queue depth (nvme0n4) 00:32:54.096 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.096 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.096 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.096 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:54.096 fio-3.35 00:32:54.096 Starting 4 threads 00:32:55.476 00:32:55.476 job0: (groupid=0, jobs=1): err= 0: pid=3332116: Mon Nov 25 13:31:52 2024 00:32:55.476 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:32:55.476 slat (nsec): min=6192, max=45411, avg=14957.18, stdev=7083.13 00:32:55.476 clat (usec): min=40749, max=41176, avg=40963.79, stdev=89.95 00:32:55.476 lat (usec): min=40755, max=41190, avg=40978.75, stdev=89.46 00:32:55.476 clat percentiles (usec): 00:32:55.476 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:55.476 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:55.476 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:55.476 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:55.476 | 99.99th=[41157] 00:32:55.476 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:32:55.476 slat (nsec): min=6629, max=29244, avg=9177.84, stdev=2256.56 00:32:55.476 clat (usec): min=144, max=447, avg=225.23, stdev=35.84 00:32:55.476 lat (usec): min=153, max=458, avg=234.40, stdev=36.00 00:32:55.476 clat percentiles (usec): 00:32:55.476 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:32:55.476 | 30.00th=[ 
210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:32:55.476 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 281], 95.00th=[ 302], 00:32:55.476 | 99.00th=[ 343], 99.50th=[ 412], 99.90th=[ 449], 99.95th=[ 449], 00:32:55.476 | 99.99th=[ 449] 00:32:55.476 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:32:55.476 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:55.476 lat (usec) : 250=82.58%, 500=13.30% 00:32:55.476 lat (msec) : 50=4.12% 00:32:55.476 cpu : usr=0.49%, sys=0.29%, ctx=535, majf=0, minf=1 00:32:55.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.476 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.476 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.476 job1: (groupid=0, jobs=1): err= 0: pid=3332117: Mon Nov 25 13:31:52 2024 00:32:55.476 read: IOPS=22, BW=89.9KiB/s (92.1kB/s)(92.0KiB/1023msec) 00:32:55.476 slat (nsec): min=5480, max=46699, avg=14954.74, stdev=7418.78 00:32:55.476 clat (usec): min=239, max=41164, avg=39181.49, stdev=8490.22 00:32:55.476 lat (usec): min=245, max=41178, avg=39196.44, stdev=8492.08 00:32:55.476 clat percentiles (usec): 00:32:55.476 | 1.00th=[ 239], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:55.476 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:55.476 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:55.476 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:55.476 | 99.99th=[41157] 00:32:55.476 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:32:55.476 slat (nsec): min=6264, max=29586, avg=8449.91, stdev=2457.45 00:32:55.476 clat (usec): min=150, max=816, avg=226.42, stdev=49.55 00:32:55.476 lat (usec): 
min=157, max=826, avg=234.87, stdev=50.08 00:32:55.476 clat percentiles (usec): 00:32:55.476 | 1.00th=[ 155], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 206], 00:32:55.476 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 223], 00:32:55.476 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 285], 95.00th=[ 297], 00:32:55.476 | 99.00th=[ 347], 99.50th=[ 412], 99.90th=[ 816], 99.95th=[ 816], 00:32:55.476 | 99.99th=[ 816] 00:32:55.476 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:32:55.477 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:55.477 lat (usec) : 250=80.93%, 500=14.58%, 750=0.19%, 1000=0.19% 00:32:55.477 lat (msec) : 50=4.11% 00:32:55.477 cpu : usr=0.10%, sys=0.59%, ctx=536, majf=0, minf=1 00:32:55.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.477 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.477 job2: (groupid=0, jobs=1): err= 0: pid=3332118: Mon Nov 25 13:31:52 2024 00:32:55.477 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:32:55.477 slat (nsec): min=12574, max=35059, avg=16278.59, stdev=6780.67 00:32:55.477 clat (usec): min=40959, max=42027, avg=41679.74, stdev=460.33 00:32:55.477 lat (usec): min=40973, max=42055, avg=41696.02, stdev=462.25 00:32:55.477 clat percentiles (usec): 00:32:55.477 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:55.477 | 30.00th=[41157], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:55.477 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:55.477 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:55.477 | 99.99th=[42206] 00:32:55.477 write: IOPS=497, BW=1990KiB/s 
(2038kB/s)(2048KiB/1029msec); 0 zone resets 00:32:55.477 slat (nsec): min=6416, max=50111, avg=9278.80, stdev=3157.77 00:32:55.477 clat (usec): min=158, max=310, avg=206.53, stdev=13.43 00:32:55.477 lat (usec): min=167, max=360, avg=215.81, stdev=14.29 00:32:55.477 clat percentiles (usec): 00:32:55.477 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 196], 00:32:55.477 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:32:55.477 | 70.00th=[ 212], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 227], 00:32:55.477 | 99.00th=[ 247], 99.50th=[ 277], 99.90th=[ 310], 99.95th=[ 310], 00:32:55.477 | 99.99th=[ 310] 00:32:55.477 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:32:55.477 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:55.477 lat (usec) : 250=94.94%, 500=0.94% 00:32:55.477 lat (msec) : 50=4.12% 00:32:55.477 cpu : usr=0.19%, sys=0.39%, ctx=534, majf=0, minf=2 00:32:55.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.477 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.477 job3: (groupid=0, jobs=1): err= 0: pid=3332119: Mon Nov 25 13:31:52 2024 00:32:55.477 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:32:55.477 slat (nsec): min=6650, max=34471, avg=14851.96, stdev=6513.82 00:32:55.477 clat (usec): min=234, max=41176, avg=39206.34, stdev=8495.80 00:32:55.477 lat (usec): min=250, max=41182, avg=39221.19, stdev=8495.61 00:32:55.477 clat percentiles (usec): 00:32:55.477 | 1.00th=[ 235], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:55.477 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:55.477 | 70.00th=[41157], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:32:55.477 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:55.477 | 99.99th=[41157] 00:32:55.477 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:32:55.477 slat (nsec): min=5883, max=24487, avg=8157.41, stdev=2905.63 00:32:55.477 clat (usec): min=136, max=1070, avg=233.72, stdev=70.76 00:32:55.477 lat (usec): min=143, max=1077, avg=241.88, stdev=71.29 00:32:55.477 clat percentiles (usec): 00:32:55.477 | 1.00th=[ 151], 5.00th=[ 172], 10.00th=[ 204], 20.00th=[ 212], 00:32:55.477 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 225], 00:32:55.477 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 277], 95.00th=[ 318], 00:32:55.477 | 99.00th=[ 383], 99.50th=[ 766], 99.90th=[ 1074], 99.95th=[ 1074], 00:32:55.477 | 99.99th=[ 1074] 00:32:55.477 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:32:55.477 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:55.477 lat (usec) : 250=82.62%, 500=12.34%, 750=0.37%, 1000=0.37% 00:32:55.477 lat (msec) : 2=0.19%, 50=4.11% 00:32:55.477 cpu : usr=0.29%, sys=0.29%, ctx=537, majf=0, minf=1 00:32:55.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:55.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.477 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:55.477 00:32:55.477 Run status group 0 (all jobs): 00:32:55.477 READ: bw=350KiB/s (358kB/s), 85.5KiB/s-89.9KiB/s (87.6kB/s-92.1kB/s), io=360KiB (369kB), run=1023-1029msec 00:32:55.477 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2002KiB/s (2038kB/s-2050kB/s), io=8192KiB (8389kB), run=1023-1029msec 00:32:55.477 00:32:55.477 Disk stats (read/write): 00:32:55.477 nvme0n1: ios=43/512, merge=0/0, 
ticks=1681/112, in_queue=1793, util=97.29% 00:32:55.477 nvme0n2: ios=59/512, merge=0/0, ticks=1586/110, in_queue=1696, util=99.08% 00:32:55.477 nvme0n3: ios=17/512, merge=0/0, ticks=708/101, in_queue=809, util=88.77% 00:32:55.477 nvme0n4: ios=66/512, merge=0/0, ticks=967/121, in_queue=1088, util=99.26% 00:32:55.477 13:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:55.477 [global] 00:32:55.477 thread=1 00:32:55.477 invalidate=1 00:32:55.477 rw=write 00:32:55.477 time_based=1 00:32:55.477 runtime=1 00:32:55.477 ioengine=libaio 00:32:55.477 direct=1 00:32:55.477 bs=4096 00:32:55.477 iodepth=128 00:32:55.477 norandommap=0 00:32:55.477 numjobs=1 00:32:55.477 00:32:55.477 verify_dump=1 00:32:55.477 verify_backlog=512 00:32:55.477 verify_state_save=0 00:32:55.477 do_verify=1 00:32:55.477 verify=crc32c-intel 00:32:55.477 [job0] 00:32:55.477 filename=/dev/nvme0n1 00:32:55.477 [job1] 00:32:55.477 filename=/dev/nvme0n2 00:32:55.477 [job2] 00:32:55.477 filename=/dev/nvme0n3 00:32:55.477 [job3] 00:32:55.477 filename=/dev/nvme0n4 00:32:55.477 Could not set queue depth (nvme0n1) 00:32:55.477 Could not set queue depth (nvme0n2) 00:32:55.477 Could not set queue depth (nvme0n3) 00:32:55.477 Could not set queue depth (nvme0n4) 00:32:55.736 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.736 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.736 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.736 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:55.736 fio-3.35 00:32:55.736 Starting 4 threads 00:32:57.112 00:32:57.112 job0: (groupid=0, jobs=1): err= 0: pid=3332470: Mon Nov 25 
13:31:54 2024 00:32:57.112 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:32:57.112 slat (usec): min=2, max=14428, avg=95.43, stdev=683.92 00:32:57.112 clat (usec): min=1276, max=42768, avg=12741.73, stdev=4242.55 00:32:57.112 lat (usec): min=1409, max=42777, avg=12837.16, stdev=4283.52 00:32:57.112 clat percentiles (usec): 00:32:57.112 | 1.00th=[ 5800], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10290], 00:32:57.112 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:32:57.112 | 70.00th=[12911], 80.00th=[15008], 90.00th=[18482], 95.00th=[20841], 00:32:57.112 | 99.00th=[27395], 99.50th=[29492], 99.90th=[29492], 99.95th=[31589], 00:32:57.112 | 99.99th=[42730] 00:32:57.112 write: IOPS=5205, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1004msec); 0 zone resets 00:32:57.112 slat (usec): min=3, max=13255, avg=84.64, stdev=595.00 00:32:57.112 clat (usec): min=596, max=38417, avg=11892.98, stdev=4710.95 00:32:57.112 lat (usec): min=612, max=38423, avg=11977.62, stdev=4751.97 00:32:57.112 clat percentiles (usec): 00:32:57.112 | 1.00th=[ 807], 5.00th=[ 5014], 10.00th=[ 9110], 20.00th=[10028], 00:32:57.112 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:32:57.112 | 70.00th=[11994], 80.00th=[12780], 90.00th=[16450], 95.00th=[18482], 00:32:57.112 | 99.00th=[34341], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:32:57.112 | 99.99th=[38536] 00:32:57.112 bw ( KiB/s): min=19960, max=21000, per=32.19%, avg=20480.00, stdev=735.39, samples=2 00:32:57.112 iops : min= 4990, max= 5250, avg=5120.00, stdev=183.85, samples=2 00:32:57.112 lat (usec) : 750=0.27%, 1000=0.54% 00:32:57.112 lat (msec) : 2=0.48%, 4=0.17%, 10=16.81%, 20=75.88%, 50=5.84% 00:32:57.112 cpu : usr=4.39%, sys=7.88%, ctx=454, majf=0, minf=1 00:32:57.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.112 issued rwts: total=5120,5226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.112 job1: (groupid=0, jobs=1): err= 0: pid=3332471: Mon Nov 25 13:31:54 2024 00:32:57.112 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:32:57.112 slat (usec): min=2, max=29797, avg=152.85, stdev=1313.62 00:32:57.112 clat (usec): min=1260, max=101993, avg=21169.38, stdev=18872.34 00:32:57.112 lat (usec): min=1264, max=102006, avg=21322.23, stdev=19011.58 00:32:57.112 clat percentiles (msec): 00:32:57.112 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:32:57.112 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 17], 00:32:57.112 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 49], 95.00th=[ 71], 00:32:57.112 | 99.00th=[ 88], 99.50th=[ 88], 99.90th=[ 102], 99.95th=[ 102], 00:32:57.112 | 99.99th=[ 103] 00:32:57.112 write: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1009msec); 0 zone resets 00:32:57.112 slat (usec): min=3, max=17048, avg=112.11, stdev=865.74 00:32:57.112 clat (usec): min=1110, max=44231, avg=14164.12, stdev=6063.88 00:32:57.112 lat (usec): min=1114, max=44239, avg=14276.22, stdev=6125.56 00:32:57.112 clat percentiles (usec): 00:32:57.112 | 1.00th=[ 3097], 5.00th=[ 4490], 10.00th=[ 7767], 20.00th=[10159], 00:32:57.112 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13042], 60.00th=[13829], 00:32:57.112 | 70.00th=[15664], 80.00th=[18482], 90.00th=[21890], 95.00th=[27919], 00:32:57.112 | 99.00th=[31065], 99.50th=[32375], 99.90th=[44303], 99.95th=[44303], 00:32:57.112 | 99.99th=[44303] 00:32:57.112 bw ( KiB/s): min=11352, max=17320, per=22.54%, avg=14336.00, stdev=4220.01, samples=2 00:32:57.112 iops : min= 2838, max= 4330, avg=3584.00, stdev=1055.00, samples=2 00:32:57.112 lat (msec) : 2=0.59%, 4=1.80%, 10=13.36%, 20=62.73%, 50=16.64% 00:32:57.112 lat (msec) : 100=4.77%, 250=0.11% 00:32:57.112 cpu : usr=1.98%, sys=4.76%, ctx=269, majf=0, minf=1 
00:32:57.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.112 issued rwts: total=3584,3653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.112 job2: (groupid=0, jobs=1): err= 0: pid=3332472: Mon Nov 25 13:31:54 2024 00:32:57.112 read: IOPS=2449, BW=9799KiB/s (10.0MB/s)(9828KiB/1003msec) 00:32:57.112 slat (usec): min=2, max=21217, avg=226.50, stdev=1520.81 00:32:57.112 clat (usec): min=676, max=71935, avg=29349.35, stdev=16183.83 00:32:57.112 lat (usec): min=5907, max=71941, avg=29575.85, stdev=16258.04 00:32:57.112 clat percentiles (usec): 00:32:57.112 | 1.00th=[ 6128], 5.00th=[10159], 10.00th=[11076], 20.00th=[11863], 00:32:57.112 | 30.00th=[15270], 40.00th=[21365], 50.00th=[27657], 60.00th=[35914], 00:32:57.112 | 70.00th=[40109], 80.00th=[45876], 90.00th=[50594], 95.00th=[54264], 00:32:57.112 | 99.00th=[65799], 99.50th=[65799], 99.90th=[66323], 99.95th=[66847], 00:32:57.112 | 99.99th=[71828] 00:32:57.112 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:32:57.112 slat (usec): min=3, max=38044, avg=166.19, stdev=1515.32 00:32:57.112 clat (usec): min=6214, max=72843, avg=21141.53, stdev=13550.67 00:32:57.112 lat (usec): min=6689, max=72874, avg=21307.72, stdev=13717.39 00:32:57.112 clat percentiles (usec): 00:32:57.112 | 1.00th=[ 8356], 5.00th=[10290], 10.00th=[11731], 20.00th=[12256], 00:32:57.112 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13829], 60.00th=[14353], 00:32:57.112 | 70.00th=[21627], 80.00th=[34341], 90.00th=[45876], 95.00th=[48497], 00:32:57.112 | 99.00th=[56886], 99.50th=[57410], 99.90th=[65799], 99.95th=[68682], 00:32:57.112 | 99.99th=[72877] 00:32:57.112 bw ( KiB/s): min= 8192, max=12288, per=16.10%, avg=10240.00, stdev=2896.31, samples=2 
00:32:57.112 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:32:57.112 lat (usec) : 750=0.02% 00:32:57.112 lat (msec) : 10=3.41%, 20=51.05%, 50=36.93%, 100=8.59% 00:32:57.112 cpu : usr=2.50%, sys=4.09%, ctx=247, majf=0, minf=2 00:32:57.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:32:57.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.112 issued rwts: total=2457,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.112 job3: (groupid=0, jobs=1): err= 0: pid=3332473: Mon Nov 25 13:31:54 2024 00:32:57.112 read: IOPS=4530, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1009msec) 00:32:57.112 slat (usec): min=3, max=11051, avg=106.17, stdev=693.49 00:32:57.112 clat (usec): min=3223, max=34315, avg=14280.55, stdev=3351.31 00:32:57.112 lat (usec): min=4664, max=34322, avg=14386.72, stdev=3379.34 00:32:57.112 clat percentiles (usec): 00:32:57.113 | 1.00th=[ 8717], 5.00th=[10421], 10.00th=[11076], 20.00th=[11600], 00:32:57.113 | 30.00th=[12518], 40.00th=[13304], 50.00th=[13435], 60.00th=[14091], 00:32:57.113 | 70.00th=[14746], 80.00th=[16057], 90.00th=[19006], 95.00th=[21627], 00:32:57.113 | 99.00th=[24511], 99.50th=[25035], 99.90th=[34341], 99.95th=[34341], 00:32:57.113 | 99.99th=[34341] 00:32:57.113 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:32:57.113 slat (usec): min=3, max=14256, avg=104.47, stdev=691.45 00:32:57.113 clat (usec): min=4274, max=28209, avg=13392.12, stdev=2237.81 00:32:57.113 lat (usec): min=4632, max=28226, avg=13496.59, stdev=2276.33 00:32:57.113 clat percentiles (usec): 00:32:57.113 | 1.00th=[ 7635], 5.00th=[10290], 10.00th=[10945], 20.00th=[11994], 00:32:57.113 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566], 00:32:57.113 | 70.00th=[13960], 80.00th=[14615], 
90.00th=[15795], 95.00th=[16712], 00:32:57.113 | 99.00th=[20055], 99.50th=[22676], 99.90th=[25035], 99.95th=[25035], 00:32:57.113 | 99.99th=[28181] 00:32:57.113 bw ( KiB/s): min=16384, max=20480, per=28.97%, avg=18432.00, stdev=2896.31, samples=2 00:32:57.113 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:32:57.113 lat (msec) : 4=0.01%, 10=3.86%, 20=92.01%, 50=4.12% 00:32:57.113 cpu : usr=5.36%, sys=7.24%, ctx=311, majf=0, minf=1 00:32:57.113 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:57.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.113 issued rwts: total=4571,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.113 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.113 00:32:57.113 Run status group 0 (all jobs): 00:32:57.113 READ: bw=60.9MiB/s (63.9MB/s), 9799KiB/s-19.9MiB/s (10.0MB/s-20.9MB/s), io=61.5MiB (64.4MB), run=1003-1009msec 00:32:57.113 WRITE: bw=62.1MiB/s (65.1MB/s), 9.97MiB/s-20.3MiB/s (10.5MB/s-21.3MB/s), io=62.7MiB (65.7MB), run=1003-1009msec 00:32:57.113 00:32:57.113 Disk stats (read/write): 00:32:57.113 nvme0n1: ios=4122/4581, merge=0/0, ticks=37240/34262, in_queue=71502, util=98.30% 00:32:57.113 nvme0n2: ios=3397/3584, merge=0/0, ticks=40417/37564, in_queue=77981, util=98.48% 00:32:57.113 nvme0n3: ios=1762/2048, merge=0/0, ticks=28918/21782, in_queue=50700, util=99.06% 00:32:57.113 nvme0n4: ios=3886/4096, merge=0/0, ticks=26110/21181, in_queue=47291, util=98.32% 00:32:57.113 13:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:57.113 [global] 00:32:57.113 thread=1 00:32:57.113 invalidate=1 00:32:57.113 rw=randwrite 00:32:57.113 time_based=1 00:32:57.113 runtime=1 00:32:57.113 ioengine=libaio 
00:32:57.113 direct=1 00:32:57.113 bs=4096 00:32:57.113 iodepth=128 00:32:57.113 norandommap=0 00:32:57.113 numjobs=1 00:32:57.113 00:32:57.113 verify_dump=1 00:32:57.113 verify_backlog=512 00:32:57.113 verify_state_save=0 00:32:57.113 do_verify=1 00:32:57.113 verify=crc32c-intel 00:32:57.113 [job0] 00:32:57.113 filename=/dev/nvme0n1 00:32:57.113 [job1] 00:32:57.113 filename=/dev/nvme0n2 00:32:57.113 [job2] 00:32:57.113 filename=/dev/nvme0n3 00:32:57.113 [job3] 00:32:57.113 filename=/dev/nvme0n4 00:32:57.113 Could not set queue depth (nvme0n1) 00:32:57.113 Could not set queue depth (nvme0n2) 00:32:57.113 Could not set queue depth (nvme0n3) 00:32:57.113 Could not set queue depth (nvme0n4) 00:32:57.113 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.113 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.113 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.113 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:57.113 fio-3.35 00:32:57.113 Starting 4 threads 00:32:58.488 00:32:58.488 job0: (groupid=0, jobs=1): err= 0: pid=3332696: Mon Nov 25 13:31:55 2024 00:32:58.488 read: IOPS=4065, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1009msec) 00:32:58.488 slat (usec): min=3, max=12674, avg=97.60, stdev=777.21 00:32:58.488 clat (usec): min=977, max=43490, avg=12404.82, stdev=5971.43 00:32:58.488 lat (usec): min=1060, max=43500, avg=12502.42, stdev=6038.92 00:32:58.488 clat percentiles (usec): 00:32:58.488 | 1.00th=[ 2343], 5.00th=[ 4490], 10.00th=[ 6194], 20.00th=[ 9110], 00:32:58.488 | 30.00th=[ 9896], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:32:58.488 | 70.00th=[12911], 80.00th=[14222], 90.00th=[16450], 95.00th=[23987], 00:32:58.488 | 99.00th=[37487], 99.50th=[39584], 99.90th=[40109], 99.95th=[43254], 
00:32:58.488 | 99.99th=[43254] 00:32:58.488 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:32:58.488 slat (usec): min=3, max=8825, avg=109.51, stdev=577.09 00:32:58.488 clat (usec): min=212, max=63502, avg=16609.31, stdev=10811.33 00:32:58.488 lat (usec): min=242, max=63510, avg=16718.82, stdev=10876.01 00:32:58.488 clat percentiles (usec): 00:32:58.488 | 1.00th=[ 1549], 5.00th=[ 4424], 10.00th=[ 7767], 20.00th=[ 8848], 00:32:58.489 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11469], 60.00th=[14877], 00:32:58.489 | 70.00th=[19006], 80.00th=[24511], 90.00th=[34341], 95.00th=[37487], 00:32:58.489 | 99.00th=[50594], 99.50th=[53740], 99.90th=[58983], 99.95th=[58983], 00:32:58.489 | 99.99th=[63701] 00:32:58.489 bw ( KiB/s): min=17128, max=18760, per=27.14%, avg=17944.00, stdev=1154.00, samples=2 00:32:58.489 iops : min= 4282, max= 4690, avg=4486.00, stdev=288.50, samples=2 00:32:58.489 lat (usec) : 250=0.01%, 750=0.03%, 1000=0.16% 00:32:58.489 lat (msec) : 2=1.33%, 4=2.10%, 10=28.24%, 20=49.63%, 50=17.91% 00:32:58.489 lat (msec) : 100=0.57% 00:32:58.489 cpu : usr=3.87%, sys=5.26%, ctx=423, majf=0, minf=1 00:32:58.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:58.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.489 issued rwts: total=4102,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.489 job1: (groupid=0, jobs=1): err= 0: pid=3332697: Mon Nov 25 13:31:55 2024 00:32:58.489 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:32:58.489 slat (usec): min=2, max=21839, avg=108.11, stdev=759.60 00:32:58.489 clat (usec): min=7805, max=61313, avg=14572.35, stdev=7484.24 00:32:58.489 lat (usec): min=7813, max=61322, avg=14680.46, stdev=7537.16 00:32:58.489 clat percentiles (usec): 00:32:58.489 | 1.00th=[ 
8455], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10159], 00:32:58.489 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[12125], 00:32:58.489 | 70.00th=[14353], 80.00th=[18744], 90.00th=[23725], 95.00th=[29492], 00:32:58.489 | 99.00th=[44827], 99.50th=[48497], 99.90th=[61080], 99.95th=[61080], 00:32:58.489 | 99.99th=[61080] 00:32:58.489 write: IOPS=4383, BW=17.1MiB/s (18.0MB/s)(17.2MiB/1006msec); 0 zone resets 00:32:58.489 slat (usec): min=3, max=10437, avg=116.83, stdev=558.36 00:32:58.489 clat (usec): min=1084, max=74971, avg=15198.58, stdev=12234.45 00:32:58.489 lat (usec): min=1093, max=75001, avg=15315.41, stdev=12307.63 00:32:58.489 clat percentiles (usec): 00:32:58.489 | 1.00th=[ 3884], 5.00th=[ 7701], 10.00th=[ 8225], 20.00th=[ 9634], 00:32:58.489 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:32:58.489 | 70.00th=[11469], 80.00th=[12518], 90.00th=[33817], 95.00th=[40109], 00:32:58.489 | 99.00th=[70779], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:32:58.489 | 99.99th=[74974] 00:32:58.489 bw ( KiB/s): min=16040, max=18216, per=25.90%, avg=17128.00, stdev=1538.66, samples=2 00:32:58.489 iops : min= 4010, max= 4554, avg=4282.00, stdev=384.67, samples=2 00:32:58.489 lat (msec) : 2=0.06%, 4=0.58%, 10=18.05%, 20=64.82%, 50=14.75% 00:32:58.489 lat (msec) : 100=1.74% 00:32:58.489 cpu : usr=5.07%, sys=8.76%, ctx=433, majf=0, minf=1 00:32:58.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:58.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.489 issued rwts: total=4096,4410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.489 job2: (groupid=0, jobs=1): err= 0: pid=3332700: Mon Nov 25 13:31:55 2024 00:32:58.489 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.3MiB/1045msec) 00:32:58.489 slat (usec): min=2, 
max=21431, avg=169.83, stdev=1097.07 00:32:58.489 clat (usec): min=9720, max=88192, avg=23965.42, stdev=12822.58 00:32:58.489 lat (usec): min=10134, max=88205, avg=24135.26, stdev=12875.82 00:32:58.489 clat percentiles (usec): 00:32:58.489 | 1.00th=[10290], 5.00th=[12780], 10.00th=[13435], 20.00th=[15401], 00:32:58.489 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19530], 60.00th=[21103], 00:32:58.489 | 70.00th=[24773], 80.00th=[28967], 90.00th=[39060], 95.00th=[51119], 00:32:58.489 | 99.00th=[74974], 99.50th=[78119], 99.90th=[88605], 99.95th=[88605], 00:32:58.489 | 99.99th=[88605] 00:32:58.489 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1045msec); 0 zone resets 00:32:58.489 slat (usec): min=3, max=17473, avg=173.41, stdev=962.57 00:32:58.489 clat (usec): min=8944, max=59330, avg=22436.75, stdev=10811.36 00:32:58.489 lat (usec): min=8948, max=59343, avg=22610.16, stdev=10839.22 00:32:58.489 clat percentiles (usec): 00:32:58.489 | 1.00th=[ 9110], 5.00th=[11863], 10.00th=[12911], 20.00th=[16319], 00:32:58.489 | 30.00th=[16909], 40.00th=[17171], 50.00th=[18482], 60.00th=[22414], 00:32:58.489 | 70.00th=[23462], 80.00th=[26346], 90.00th=[41681], 95.00th=[50070], 00:32:58.489 | 99.00th=[58983], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:32:58.489 | 99.99th=[59507] 00:32:58.489 bw ( KiB/s): min=11848, max=12288, per=18.25%, avg=12068.00, stdev=311.13, samples=2 00:32:58.489 iops : min= 2962, max= 3072, avg=3017.00, stdev=77.78, samples=2 00:32:58.489 lat (msec) : 10=1.77%, 20=54.83%, 50=37.77%, 100=5.63% 00:32:58.489 cpu : usr=2.78%, sys=4.12%, ctx=270, majf=0, minf=1 00:32:58.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:32:58.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.489 issued rwts: total=2633,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.489 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:32:58.489 job3: (groupid=0, jobs=1): err= 0: pid=3332701: Mon Nov 25 13:31:55 2024 00:32:58.489 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:32:58.489 slat (usec): min=2, max=5875, avg=94.54, stdev=549.71 00:32:58.489 clat (usec): min=5540, max=55277, avg=12484.11, stdev=1756.23 00:32:58.489 lat (usec): min=5548, max=57892, avg=12578.65, stdev=1804.90 00:32:58.489 clat percentiles (usec): 00:32:58.489 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[11338], 00:32:58.489 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[12911], 00:32:58.489 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14222], 95.00th=[14877], 00:32:58.489 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:32:58.489 | 99.99th=[55313] 00:32:58.489 write: IOPS=5160, BW=20.2MiB/s (21.1MB/s)(20.3MiB/1005msec); 0 zone resets 00:32:58.489 slat (usec): min=3, max=6394, avg=91.66, stdev=559.60 00:32:58.489 clat (usec): min=3489, max=18859, avg=12172.78, stdev=1717.25 00:32:58.489 lat (usec): min=3590, max=18900, avg=12264.44, stdev=1722.66 00:32:58.489 clat percentiles (usec): 00:32:58.489 | 1.00th=[ 6652], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[11207], 00:32:58.489 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:32:58.489 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:32:58.489 | 99.00th=[15533], 99.50th=[16188], 99.90th=[17171], 99.95th=[17695], 00:32:58.489 | 99.99th=[18744] 00:32:58.489 bw ( KiB/s): min=20480, max=20480, per=30.97%, avg=20480.00, stdev= 0.00, samples=2 00:32:58.489 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:32:58.489 lat (msec) : 4=0.06%, 10=9.89%, 20=90.04%, 100=0.01% 00:32:58.489 cpu : usr=4.98%, sys=7.97%, ctx=317, majf=0, minf=1 00:32:58.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:58.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.489 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:58.489 issued rwts: total=5120,5186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:58.489 00:32:58.489 Run status group 0 (all jobs): 00:32:58.489 READ: bw=59.6MiB/s (62.5MB/s), 9.84MiB/s-19.9MiB/s (10.3MB/s-20.9MB/s), io=62.3MiB (65.3MB), run=1005-1045msec 00:32:58.489 WRITE: bw=64.6MiB/s (67.7MB/s), 11.5MiB/s-20.2MiB/s (12.0MB/s-21.1MB/s), io=67.5MiB (70.8MB), run=1005-1045msec 00:32:58.489 00:32:58.489 Disk stats (read/write): 00:32:58.489 nvme0n1: ios=3216/3584, merge=0/0, ticks=39771/61640, in_queue=101411, util=98.00% 00:32:58.489 nvme0n2: ios=3634/3968, merge=0/0, ticks=23991/22020, in_queue=46011, util=98.38% 00:32:58.489 nvme0n3: ios=2257/2560, merge=0/0, ticks=12362/14943, in_queue=27305, util=98.23% 00:32:58.489 nvme0n4: ios=4143/4608, merge=0/0, ticks=20397/19424, in_queue=39821, util=97.27% 00:32:58.489 13:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:58.489 13:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3332842 00:32:58.489 13:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:58.489 13:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:58.489 [global] 00:32:58.489 thread=1 00:32:58.489 invalidate=1 00:32:58.489 rw=read 00:32:58.489 time_based=1 00:32:58.489 runtime=10 00:32:58.489 ioengine=libaio 00:32:58.489 direct=1 00:32:58.489 bs=4096 00:32:58.489 iodepth=1 00:32:58.489 norandommap=1 00:32:58.489 numjobs=1 00:32:58.489 00:32:58.489 [job0] 00:32:58.489 filename=/dev/nvme0n1 00:32:58.489 [job1] 00:32:58.489 filename=/dev/nvme0n2 00:32:58.489 [job2] 00:32:58.489 filename=/dev/nvme0n3 00:32:58.489 [job3] 
00:32:58.489 filename=/dev/nvme0n4 00:32:58.489 Could not set queue depth (nvme0n1) 00:32:58.489 Could not set queue depth (nvme0n2) 00:32:58.489 Could not set queue depth (nvme0n3) 00:32:58.489 Could not set queue depth (nvme0n4) 00:32:58.489 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.489 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.489 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.489 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:58.489 fio-3.35 00:32:58.489 Starting 4 threads 00:33:01.768 13:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:01.768 13:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:01.768 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=31948800, buflen=4096 00:33:01.768 fio: pid=3332939, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.025 13:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.025 13:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:02.025 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=19730432, buflen=4096 00:33:02.025 fio: pid=3332938, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.283 13:31:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.283 13:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:02.283 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=348160, buflen=4096 00:33:02.283 fio: pid=3332936, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.542 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56594432, buflen=4096 00:33:02.542 fio: pid=3332937, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:02.542 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.542 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:02.542 00:33:02.542 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3332936: Mon Nov 25 13:32:00 2024 00:33:02.542 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(340KiB/3554msec) 00:33:02.542 slat (usec): min=9, max=15944, avg=341.42, stdev=2134.10 00:33:02.542 clat (usec): min=435, max=42117, avg=41115.26, stdev=4491.86 00:33:02.542 lat (usec): min=458, max=58030, avg=41460.49, stdev=5040.19 00:33:02.542 clat percentiles (usec): 00:33:02.542 | 1.00th=[ 437], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:02.542 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:02.542 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:02.542 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:02.542 | 99.99th=[42206] 
00:33:02.542 bw ( KiB/s): min= 96, max= 104, per=0.35%, avg=97.33, stdev= 3.27, samples=6 00:33:02.542 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:33:02.542 lat (usec) : 500=1.16% 00:33:02.542 lat (msec) : 50=97.67% 00:33:02.542 cpu : usr=0.08%, sys=0.00%, ctx=88, majf=0, minf=2 00:33:02.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.542 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3332937: Mon Nov 25 13:32:00 2024 00:33:02.542 read: IOPS=3627, BW=14.2MiB/s (14.9MB/s)(54.0MiB/3809msec) 00:33:02.542 slat (usec): min=4, max=29263, avg=15.31, stdev=316.20 00:33:02.542 clat (usec): min=167, max=42070, avg=257.11, stdev=616.04 00:33:02.542 lat (usec): min=172, max=42088, avg=272.42, stdev=692.84 00:33:02.542 clat percentiles (usec): 00:33:02.542 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 231], 00:33:02.542 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:33:02.542 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:33:02.542 | 99.00th=[ 355], 99.50th=[ 441], 99.90th=[ 586], 99.95th=[ 898], 00:33:02.542 | 99.99th=[42206] 00:33:02.542 bw ( KiB/s): min=12776, max=16480, per=52.15%, avg=14523.86, stdev=1334.78, samples=7 00:33:02.542 iops : min= 3194, max= 4120, avg=3630.86, stdev=333.69, samples=7 00:33:02.542 lat (usec) : 250=62.74%, 500=36.97%, 750=0.22%, 1000=0.03% 00:33:02.542 lat (msec) : 2=0.01%, 50=0.02% 00:33:02.542 cpu : usr=1.94%, sys=5.54%, ctx=13825, majf=0, minf=2 00:33:02.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.542 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 issued rwts: total=13818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.542 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3332938: Mon Nov 25 13:32:00 2024 00:33:02.542 read: IOPS=1487, BW=5949KiB/s (6092kB/s)(18.8MiB/3239msec) 00:33:02.542 slat (nsec): min=4631, max=54854, avg=11384.41, stdev=5911.34 00:33:02.542 clat (usec): min=219, max=41298, avg=653.05, stdev=3733.45 00:33:02.542 lat (usec): min=225, max=41314, avg=664.44, stdev=3733.94 00:33:02.542 clat percentiles (usec): 00:33:02.542 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 255], 00:33:02.542 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 302], 00:33:02.542 | 70.00th=[ 314], 80.00th=[ 326], 90.00th=[ 379], 95.00th=[ 494], 00:33:02.542 | 99.00th=[ 881], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:02.542 | 99.99th=[41157] 00:33:02.542 bw ( KiB/s): min= 96, max=10744, per=19.06%, avg=5309.33, stdev=4621.30, samples=6 00:33:02.542 iops : min= 24, max= 2686, avg=1327.33, stdev=1155.33, samples=6 00:33:02.542 lat (usec) : 250=15.46%, 500=80.34%, 750=3.13%, 1000=0.10% 00:33:02.542 lat (msec) : 2=0.06%, 20=0.02%, 50=0.85% 00:33:02.542 cpu : usr=1.02%, sys=2.53%, ctx=4822, majf=0, minf=1 00:33:02.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 issued rwts: total=4818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.542 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=3332939: Mon Nov 25 13:32:00 2024 00:33:02.542 read: IOPS=2643, BW=10.3MiB/s (10.8MB/s)(30.5MiB/2951msec) 00:33:02.542 slat (nsec): min=4393, max=77425, avg=12045.25, stdev=7846.61 00:33:02.542 clat (usec): min=204, max=41032, avg=360.45, stdev=1897.88 00:33:02.542 lat (usec): min=211, max=41048, avg=372.50, stdev=1898.06 00:33:02.542 clat percentiles (usec): 00:33:02.542 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:33:02.542 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:33:02.542 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 343], 00:33:02.542 | 99.00th=[ 433], 99.50th=[ 498], 99.90th=[41157], 99.95th=[41157], 00:33:02.542 | 99.99th=[41157] 00:33:02.542 bw ( KiB/s): min= 1592, max=14136, per=35.84%, avg=9982.40, stdev=5407.35, samples=5 00:33:02.542 iops : min= 398, max= 3534, avg=2495.60, stdev=1351.84, samples=5 00:33:02.542 lat (usec) : 250=26.70%, 500=72.79%, 750=0.22%, 1000=0.05% 00:33:02.542 lat (msec) : 2=0.01%, 50=0.22% 00:33:02.542 cpu : usr=1.36%, sys=3.69%, ctx=7802, majf=0, minf=1 00:33:02.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.542 issued rwts: total=7801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:02.542 00:33:02.542 Run status group 0 (all jobs): 00:33:02.542 READ: bw=27.2MiB/s (28.5MB/s), 95.7KiB/s-14.2MiB/s (98.0kB/s-14.9MB/s), io=104MiB (109MB), run=2951-3809msec 00:33:02.542 00:33:02.542 Disk stats (read/write): 00:33:02.542 nvme0n1: ios=81/0, merge=0/0, ticks=3329/0, in_queue=3329, util=95.31% 00:33:02.542 nvme0n2: ios=13122/0, merge=0/0, ticks=3864/0, in_queue=3864, util=98.63% 00:33:02.542 nvme0n3: ios=4437/0, merge=0/0, ticks=4002/0, in_queue=4002, util=99.81% 00:33:02.542 nvme0n4: ios=7581/0, 
merge=0/0, ticks=2982/0, in_queue=2982, util=100.00% 00:33:02.801 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:02.801 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:03.058 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:03.059 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:03.317 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:03.317 13:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:03.575 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:03.575 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:03.833 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:03.833 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3332842 00:33:03.833 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:03.833 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # 
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:04.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:04.091 nvmf hotplug test: fio failed as expected 00:33:04.091 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:04.349 13:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.349 rmmod nvme_tcp 00:33:04.349 rmmod nvme_fabrics 00:33:04.349 rmmod nvme_keyring 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3330940 ']' 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3330940 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3330940 ']' 00:33:04.349 13:32:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3330940 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3330940 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3330940' 00:33:04.349 killing process with pid 3330940 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3330940 00:33:04.349 13:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3330940 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 
00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.610 13:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.149 00:33:07.149 real 0m23.797s 00:33:07.149 user 1m7.031s 00:33:07.149 sys 0m10.150s 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:07.149 ************************************ 00:33:07.149 END TEST nvmf_fio_target 00:33:07.149 ************************************ 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:33:07.149 ************************************ 00:33:07.149 START TEST nvmf_bdevio 00:33:07.149 ************************************ 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:07.149 * Looking for test storage... 00:33:07.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 
-- # local 'op=<' 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@366 -- # ver2[v]=2 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.149 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.149 --rc genhtml_branch_coverage=1 00:33:07.149 --rc genhtml_function_coverage=1 00:33:07.149 --rc genhtml_legend=1 00:33:07.149 --rc geninfo_all_blocks=1 00:33:07.149 --rc geninfo_unexecuted_blocks=1 00:33:07.149 00:33:07.150 ' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:07.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.150 --rc genhtml_branch_coverage=1 00:33:07.150 --rc genhtml_function_coverage=1 00:33:07.150 --rc genhtml_legend=1 00:33:07.150 --rc geninfo_all_blocks=1 00:33:07.150 --rc geninfo_unexecuted_blocks=1 00:33:07.150 00:33:07.150 ' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:07.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.150 --rc genhtml_branch_coverage=1 00:33:07.150 --rc genhtml_function_coverage=1 00:33:07.150 --rc genhtml_legend=1 00:33:07.150 --rc geninfo_all_blocks=1 00:33:07.150 --rc geninfo_unexecuted_blocks=1 00:33:07.150 00:33:07.150 ' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:07.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.150 --rc genhtml_branch_coverage=1 00:33:07.150 --rc genhtml_function_coverage=1 00:33:07.150 --rc genhtml_legend=1 00:33:07.150 --rc geninfo_all_blocks=1 00:33:07.150 --rc geninfo_unexecuted_blocks=1 00:33:07.150 00:33:07.150 ' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.150 13:32:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.150 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.151 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:07.151 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.151 13:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:33:09.056 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:09.056 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:09.056 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:09.056 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:09.056 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:09.056 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:09.057 13:32:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:09.057 13:32:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:09.057 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:09.057 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:09.057 Found net devices under 0000:09:00.0: cvl_0_0 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:09.057 Found net devices under 0000:09:00.1: cvl_0_1 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:09.057 
13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:09.057 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:09.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:09.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:33:09.058 00:33:09.058 --- 10.0.0.2 ping statistics --- 00:33:09.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.058 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:09.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:09.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:33:09.058 00:33:09.058 --- 10.0.0.1 ping statistics --- 00:33:09.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:09.058 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3335673 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3335673 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3335673 ']' 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:09.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:09.058 13:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.366 [2024-11-25 13:32:06.742307] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:09.366 [2024-11-25 13:32:06.743370] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:33:09.366 [2024-11-25 13:32:06.743442] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:09.366 [2024-11-25 13:32:06.815844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:09.366 [2024-11-25 13:32:06.877666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:09.366 [2024-11-25 13:32:06.877719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:09.366 [2024-11-25 13:32:06.877732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:09.366 [2024-11-25 13:32:06.877743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:09.366 [2024-11-25 13:32:06.877752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:09.366 [2024-11-25 13:32:06.879355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:09.366 [2024-11-25 13:32:06.879475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:09.366 [2024-11-25 13:32:06.879525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:09.366 [2024-11-25 13:32:06.879529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:09.366 [2024-11-25 13:32:06.980067] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:09.366 [2024-11-25 13:32:06.980285] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:09.366 [2024-11-25 13:32:06.980598] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:09.366 [2024-11-25 13:32:06.981322] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:09.366 [2024-11-25 13:32:06.981576] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.632 [2024-11-25 13:32:07.036264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:09.632 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.633 Malloc0 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:09.633 [2024-11-25 13:32:07.112542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:09.633 { 00:33:09.633 "params": { 00:33:09.633 "name": "Nvme$subsystem", 00:33:09.633 "trtype": "$TEST_TRANSPORT", 00:33:09.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:09.633 "adrfam": "ipv4", 00:33:09.633 "trsvcid": "$NVMF_PORT", 00:33:09.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:09.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:09.633 "hdgst": ${hdgst:-false}, 00:33:09.633 "ddgst": ${ddgst:-false} 00:33:09.633 }, 00:33:09.633 "method": "bdev_nvme_attach_controller" 00:33:09.633 } 00:33:09.633 EOF 00:33:09.633 )") 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:09.633 13:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:09.633 "params": { 00:33:09.633 "name": "Nvme1", 00:33:09.633 "trtype": "tcp", 00:33:09.633 "traddr": "10.0.0.2", 00:33:09.633 "adrfam": "ipv4", 00:33:09.633 "trsvcid": "4420", 00:33:09.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:09.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:09.633 "hdgst": false, 00:33:09.633 "ddgst": false 00:33:09.633 }, 00:33:09.633 "method": "bdev_nvme_attach_controller" 00:33:09.633 }' 00:33:09.633 [2024-11-25 13:32:07.164615] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:33:09.633 [2024-11-25 13:32:07.164709] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335706 ] 00:33:09.633 [2024-11-25 13:32:07.236214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:09.891 [2024-11-25 13:32:07.302522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:09.891 [2024-11-25 13:32:07.302572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:09.891 [2024-11-25 13:32:07.302575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.891 I/O targets: 00:33:09.891 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:09.891 00:33:09.891 00:33:09.891 CUnit - A unit testing framework for C - Version 2.1-3 00:33:09.891 http://cunit.sourceforge.net/ 00:33:09.891 00:33:09.891 00:33:09.891 Suite: bdevio tests on: Nvme1n1 00:33:10.148 Test: blockdev write read block ...passed 00:33:10.149 Test: blockdev write zeroes read block ...passed 00:33:10.149 Test: blockdev write zeroes read no split ...passed 00:33:10.149 Test: blockdev 
write zeroes read split ...passed 00:33:10.149 Test: blockdev write zeroes read split partial ...passed 00:33:10.149 Test: blockdev reset ...[2024-11-25 13:32:07.637105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:10.149 [2024-11-25 13:32:07.637208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x591680 (9): Bad file descriptor 00:33:10.149 [2024-11-25 13:32:07.730506] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:10.149 passed 00:33:10.149 Test: blockdev write read 8 blocks ...passed 00:33:10.149 Test: blockdev write read size > 128k ...passed 00:33:10.149 Test: blockdev write read invalid size ...passed 00:33:10.149 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:10.149 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:10.149 Test: blockdev write read max offset ...passed 00:33:10.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:10.406 Test: blockdev writev readv 8 blocks ...passed 00:33:10.406 Test: blockdev writev readv 30 x 1block ...passed 00:33:10.406 Test: blockdev writev readv block ...passed 00:33:10.406 Test: blockdev writev readv size > 128k ...passed 00:33:10.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:10.406 Test: blockdev comparev and writev ...[2024-11-25 13:32:07.902557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.406 [2024-11-25 13:32:07.902594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:10.406 [2024-11-25 13:32:07.902619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.406 
[2024-11-25 13:32:07.902635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:10.406 [2024-11-25 13:32:07.903039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.406 [2024-11-25 13:32:07.903066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:10.406 [2024-11-25 13:32:07.903089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.407 [2024-11-25 13:32:07.903105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:10.407 [2024-11-25 13:32:07.903515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.407 [2024-11-25 13:32:07.903540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:10.407 [2024-11-25 13:32:07.903562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.407 [2024-11-25 13:32:07.903578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:10.407 [2024-11-25 13:32:07.903966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.407 [2024-11-25 13:32:07.903990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:10.407 [2024-11-25 13:32:07.904010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:10.407 [2024-11-25 13:32:07.904026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:10.407 passed 00:33:10.407 Test: blockdev nvme passthru rw ...passed 00:33:10.407 Test: blockdev nvme passthru vendor specific ...[2024-11-25 13:32:07.986572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:10.407 [2024-11-25 13:32:07.986599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:10.407 [2024-11-25 13:32:07.986750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:10.407 [2024-11-25 13:32:07.986774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:10.407 [2024-11-25 13:32:07.986927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:10.407 [2024-11-25 13:32:07.986951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:10.407 [2024-11-25 13:32:07.987109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:10.407 [2024-11-25 13:32:07.987133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:10.407 passed 00:33:10.407 Test: blockdev nvme admin passthru ...passed 00:33:10.407 Test: blockdev copy ...passed 00:33:10.407 00:33:10.407 Run Summary: Type Total Ran Passed Failed Inactive 00:33:10.407 suites 1 1 n/a 0 0 00:33:10.407 tests 23 23 23 0 0 00:33:10.407 asserts 152 152 152 0 n/a 00:33:10.407 00:33:10.407 Elapsed time = 1.026 
seconds 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:10.665 rmmod nvme_tcp 00:33:10.665 rmmod nvme_fabrics 00:33:10.665 rmmod nvme_keyring 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3335673 ']' 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3335673 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3335673 ']' 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3335673 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3335673 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3335673' 00:33:10.665 killing process with pid 3335673 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3335673 00:33:10.665 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3335673 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:10.924 13:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.456 13:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:13.456 00:33:13.456 real 0m6.322s 00:33:13.456 user 0m7.983s 00:33:13.456 sys 0m2.518s 00:33:13.456 13:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.456 13:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:13.456 ************************************ 00:33:13.457 END TEST nvmf_bdevio 00:33:13.457 ************************************ 00:33:13.457 13:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:13.457 00:33:13.457 real 3m55.163s 00:33:13.457 user 8m56.609s 00:33:13.457 sys 1m23.443s 00:33:13.457 13:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:33:13.457 13:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:13.457 ************************************ 00:33:13.457 END TEST nvmf_target_core_interrupt_mode 00:33:13.457 ************************************ 00:33:13.457 13:32:10 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:13.457 13:32:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:13.457 13:32:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:13.457 13:32:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:13.457 ************************************ 00:33:13.457 START TEST nvmf_interrupt 00:33:13.457 ************************************ 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:13.457 * Looking for test storage... 
00:33:13.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.457 --rc genhtml_branch_coverage=1 00:33:13.457 --rc genhtml_function_coverage=1 00:33:13.457 --rc genhtml_legend=1 00:33:13.457 --rc geninfo_all_blocks=1 00:33:13.457 --rc geninfo_unexecuted_blocks=1 00:33:13.457 00:33:13.457 ' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.457 --rc genhtml_branch_coverage=1 00:33:13.457 --rc 
genhtml_function_coverage=1 00:33:13.457 --rc genhtml_legend=1 00:33:13.457 --rc geninfo_all_blocks=1 00:33:13.457 --rc geninfo_unexecuted_blocks=1 00:33:13.457 00:33:13.457 ' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.457 --rc genhtml_branch_coverage=1 00:33:13.457 --rc genhtml_function_coverage=1 00:33:13.457 --rc genhtml_legend=1 00:33:13.457 --rc geninfo_all_blocks=1 00:33:13.457 --rc geninfo_unexecuted_blocks=1 00:33:13.457 00:33:13.457 ' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:13.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:13.457 --rc genhtml_branch_coverage=1 00:33:13.457 --rc genhtml_function_coverage=1 00:33:13.457 --rc genhtml_legend=1 00:33:13.457 --rc geninfo_all_blocks=1 00:33:13.457 --rc geninfo_unexecuted_blocks=1 00:33:13.457 00:33:13.457 ' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:13.457 
13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.457 
13:32:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:13.457 13:32:10 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:13.457 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:13.458 
13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:13.458 13:32:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.359 13:32:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:15.359 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:15.359 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.359 13:32:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:15.359 Found net devices under 0000:09:00.0: cvl_0_0 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:15.359 Found net devices under 0000:09:00.1: cvl_0_1 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:15.359 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:15.360 13:32:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.617 13:32:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:15.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:15.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:33:15.617 00:33:15.617 --- 10.0.0.2 ping statistics --- 00:33:15.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.617 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:15.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:33:15.617 00:33:15.617 --- 10.0.0.1 ping statistics --- 00:33:15.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.617 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:15.617 13:32:13 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:15.617 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3337794 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3337794 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3337794 ']' 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.618 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.618 [2024-11-25 13:32:13.135470] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:15.618 [2024-11-25 13:32:13.136760] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:33:15.618 [2024-11-25 13:32:13.136819] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.618 [2024-11-25 13:32:13.219625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:15.875 [2024-11-25 13:32:13.280007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.876 [2024-11-25 13:32:13.280058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.876 [2024-11-25 13:32:13.280071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.876 [2024-11-25 13:32:13.280082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.876 [2024-11-25 13:32:13.280092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:15.876 [2024-11-25 13:32:13.281534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.876 [2024-11-25 13:32:13.281539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.876 [2024-11-25 13:32:13.377495] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:15.876 [2024-11-25 13:32:13.377530] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:15.876 [2024-11-25 13:32:13.377782] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:15.876 5000+0 records in 00:33:15.876 5000+0 records out 00:33:15.876 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0146472 s, 699 MB/s 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.876 AIO0 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.876 13:32:13 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.876 [2024-11-25 13:32:13.494162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:15.876 [2024-11-25 13:32:13.518428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3337794 0 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3337794 0 idle 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:15.876 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337794 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.29 reactor_0' 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337794 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.29 reactor_0 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:16.135 
13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3337794 1 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3337794 1 idle 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:16.135 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337804 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337804 root 20 0 128.2g 
47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3337954 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3337794 0 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3337794 0 busy 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:16.393 13:32:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337794 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.30 reactor_0' 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337794 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.30 reactor_0 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:16.393 13:32:14 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337794 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.58 reactor_0' 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337794 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.58 reactor_0 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3337794 1 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3337794 1 busy 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337804 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.31 reactor_1' 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337804 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.31 reactor_1 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:17.766 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:17.767 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:17.767 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:17.767 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:17.767 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:17.767 13:32:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:17.767 13:32:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3337954 00:33:27.728 Initializing NVMe Controllers 00:33:27.728 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:27.728 
Controller IO queue size 256, less than required. 00:33:27.728 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:27.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:27.728 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:27.728 Initialization complete. Launching workers. 00:33:27.728 ======================================================== 00:33:27.728 Latency(us) 00:33:27.728 Device Information : IOPS MiB/s Average min max 00:33:27.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13756.90 53.74 18620.68 4435.84 23330.99 00:33:27.728 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13682.50 53.45 18723.30 4035.44 22891.42 00:33:27.728 ======================================================== 00:33:27.728 Total : 27439.39 107.19 18671.85 4035.44 23330.99 00:33:27.728 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3337794 0 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3337794 0 idle 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:27.728 13:32:24 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337794 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.25 reactor_0' 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337794 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.25 reactor_0 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:27.728 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3337794 1 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3337794 1 idle 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337804 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337804 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:27.729 13:32:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:29.103 13:32:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:29.104 13:32:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:29.104 13:32:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3337794 0 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3337794 0 idle 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:29.361 13:32:26 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337794 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.34 reactor_0' 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337794 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.34 reactor_0 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3337794 1 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3337794 1 idle 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3337794 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3337794 -w 256 00:33:29.361 13:32:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3337804 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3337804 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:29.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:29.619 rmmod nvme_tcp 00:33:29.619 rmmod nvme_fabrics 00:33:29.619 rmmod nvme_keyring 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3337794 ']' 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3337794 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3337794 ']' 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3337794 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:29.619 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3337794 00:33:29.876 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:29.876 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:29.876 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3337794' 00:33:29.876 killing process with pid 3337794 00:33:29.876 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3337794 00:33:29.876 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3337794 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:30.135 13:32:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.039 13:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:32.039 00:33:32.039 real 0m18.965s 00:33:32.039 user 0m36.890s 00:33:32.039 sys 0m6.794s 00:33:32.039 13:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.039 13:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:32.039 ************************************ 00:33:32.039 END TEST nvmf_interrupt 00:33:32.039 ************************************ 00:33:32.039 00:33:32.039 real 24m52.405s 00:33:32.039 user 58m17.863s 00:33:32.039 sys 6m44.107s 00:33:32.039 13:32:29 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:32.039 13:32:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.039 ************************************ 00:33:32.039 END TEST nvmf_tcp 00:33:32.039 ************************************ 00:33:32.039 13:32:29 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:32.039 13:32:29 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:32.039 13:32:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:32.039 13:32:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:32.039 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:33:32.298 ************************************ 00:33:32.298 START TEST spdkcli_nvmf_tcp 00:33:32.298 ************************************ 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:32.298 * Looking for test storage... 00:33:32.298 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:32.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.298 --rc genhtml_branch_coverage=1 00:33:32.298 --rc genhtml_function_coverage=1 00:33:32.298 --rc genhtml_legend=1 00:33:32.298 --rc geninfo_all_blocks=1 
00:33:32.298 --rc geninfo_unexecuted_blocks=1 00:33:32.298 00:33:32.298 ' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:32.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.298 --rc genhtml_branch_coverage=1 00:33:32.298 --rc genhtml_function_coverage=1 00:33:32.298 --rc genhtml_legend=1 00:33:32.298 --rc geninfo_all_blocks=1 00:33:32.298 --rc geninfo_unexecuted_blocks=1 00:33:32.298 00:33:32.298 ' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:32.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.298 --rc genhtml_branch_coverage=1 00:33:32.298 --rc genhtml_function_coverage=1 00:33:32.298 --rc genhtml_legend=1 00:33:32.298 --rc geninfo_all_blocks=1 00:33:32.298 --rc geninfo_unexecuted_blocks=1 00:33:32.298 00:33:32.298 ' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:32.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.298 --rc genhtml_branch_coverage=1 00:33:32.298 --rc genhtml_function_coverage=1 00:33:32.298 --rc genhtml_legend=1 00:33:32.298 --rc geninfo_all_blocks=1 00:33:32.298 --rc geninfo_unexecuted_blocks=1 00:33:32.298 00:33:32.298 ' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:32.298 13:32:29 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.298 13:32:29 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:32.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3339970 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3339970 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 
3339970 ']' 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.299 13:32:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.299 [2024-11-25 13:32:29.900219] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:33:32.299 [2024-11-25 13:32:29.900320] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339970 ] 00:33:32.556 [2024-11-25 13:32:29.968387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:32.556 [2024-11-25 13:32:30.032078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.556 [2024-11-25 13:32:30.032082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.556 13:32:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:32.556 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:32.556 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:32.556 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:32.556 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:32.556 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:32.556 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:32.556 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.556 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.556 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:32.556 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:32.556 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:32.556 ' 00:33:35.836 [2024-11-25 13:32:32.808994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:36.767 [2024-11-25 13:32:34.085562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:33:39.297 [2024-11-25 13:32:36.428658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:41.243 [2024-11-25 13:32:38.443067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:42.613 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:42.613 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:42.613 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:42.613 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:42.613 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:42.613 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:42.613 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:42.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:42.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:42.613 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:42.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:42.613 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:42.613 13:32:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:43.178 13:32:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:43.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:33:43.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:43.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:43.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:43.178 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:43.178 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:43.178 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:43.178 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:43.178 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:43.178 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:43.178 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:43.178 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:43.178 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:43.178 ' 00:33:48.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:48.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:48.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:48.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:48.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:48.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:48.434 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:48.434 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:48.434 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:48.434 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:48.434 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:48.434 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:48.434 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:48.434 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3339970 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3339970 ']' 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3339970 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.434 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339970 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339970' 00:33:48.692 killing process with pid 3339970 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3339970 00:33:48.692 13:32:46 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3339970 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3339970 ']' 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3339970 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3339970 ']' 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3339970 00:33:48.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3339970) - No such process 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3339970 is not found' 00:33:48.692 Process with pid 3339970 is not found 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:48.692 00:33:48.692 real 0m16.628s 00:33:48.692 user 0m35.433s 00:33:48.692 sys 0m0.767s 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.692 13:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:48.692 ************************************ 00:33:48.692 END TEST spdkcli_nvmf_tcp 00:33:48.692 ************************************ 00:33:48.692 13:32:46 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:48.692 13:32:46 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:33:48.692 13:32:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.692 13:32:46 -- common/autotest_common.sh@10 -- # set +x 00:33:48.950 ************************************ 00:33:48.950 START TEST nvmf_identify_passthru 00:33:48.950 ************************************ 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:48.950 * Looking for test storage... 00:33:48.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.950 13:32:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.950 --rc genhtml_branch_coverage=1 00:33:48.950 --rc genhtml_function_coverage=1 00:33:48.950 --rc genhtml_legend=1 
00:33:48.950 --rc geninfo_all_blocks=1 00:33:48.950 --rc geninfo_unexecuted_blocks=1 00:33:48.950 00:33:48.950 ' 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.950 --rc genhtml_branch_coverage=1 00:33:48.950 --rc genhtml_function_coverage=1 00:33:48.950 --rc genhtml_legend=1 00:33:48.950 --rc geninfo_all_blocks=1 00:33:48.950 --rc geninfo_unexecuted_blocks=1 00:33:48.950 00:33:48.950 ' 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.950 --rc genhtml_branch_coverage=1 00:33:48.950 --rc genhtml_function_coverage=1 00:33:48.950 --rc genhtml_legend=1 00:33:48.950 --rc geninfo_all_blocks=1 00:33:48.950 --rc geninfo_unexecuted_blocks=1 00:33:48.950 00:33:48.950 ' 00:33:48.950 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.950 --rc genhtml_branch_coverage=1 00:33:48.950 --rc genhtml_function_coverage=1 00:33:48.950 --rc genhtml_legend=1 00:33:48.950 --rc geninfo_all_blocks=1 00:33:48.950 --rc geninfo_unexecuted_blocks=1 00:33:48.950 00:33:48.950 ' 00:33:48.950 13:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.950 13:32:46 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.950 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:48.951 13:32:46 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:48.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.951 13:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.951 13:32:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:48.951 13:32:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.951 13:32:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.951 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:48.951 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:48.951 13:32:46 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:48.951 13:32:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:51.479 
13:32:48 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:51.479 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:51.480 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:51.480 Found 0000:09:00.1 
(0x8086 - 0x159b) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:51.480 Found net devices under 0000:09:00.0: cvl_0_0 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:51.480 13:32:48 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:51.480 Found net devices under 0000:09:00.1: cvl_0_1 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:51.480 
13:32:48 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:51.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:51.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:33:51.480 00:33:51.480 --- 10.0.0.2 ping statistics --- 00:33:51.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.480 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:51.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:51.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:33:51.480 00:33:51.480 --- 10.0.0.1 ping statistics --- 00:33:51.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:51.480 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:51.480 13:32:48 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:51.480 13:32:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:51.480 13:32:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:51.480 
13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:33:51.480 13:32:48 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:33:51.480 13:32:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:33:51.480 13:32:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:33:51.480 13:32:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:33:51.480 13:32:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:51.480 13:32:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:55.666 13:32:52 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:33:55.666 13:32:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:33:55.666 13:32:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:55.666 13:32:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3344491 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3344491 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3344491 ']' 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.847 [2024-11-25 13:32:57.094333] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:33:59.847 [2024-11-25 13:32:57.094411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.847 [2024-11-25 13:32:57.168955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:59.847 [2024-11-25 13:32:57.228059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.847 [2024-11-25 13:32:57.228107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.847 [2024-11-25 13:32:57.228130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.847 [2024-11-25 13:32:57.228141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.847 [2024-11-25 13:32:57.228151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:59.847 [2024-11-25 13:32:57.229622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.847 [2024-11-25 13:32:57.229682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.847 [2024-11-25 13:32:57.229748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.847 [2024-11-25 13:32:57.229751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.847 INFO: Log level set to 20 00:33:59.847 INFO: Requests: 00:33:59.847 { 00:33:59.847 "jsonrpc": "2.0", 00:33:59.847 "method": "nvmf_set_config", 00:33:59.847 "id": 1, 00:33:59.847 "params": { 00:33:59.847 "admin_cmd_passthru": { 00:33:59.847 "identify_ctrlr": true 00:33:59.847 } 00:33:59.847 } 00:33:59.847 } 00:33:59.847 00:33:59.847 INFO: response: 00:33:59.847 { 00:33:59.847 "jsonrpc": "2.0", 00:33:59.847 "id": 1, 00:33:59.847 "result": true 00:33:59.847 } 00:33:59.847 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.847 INFO: Setting log level to 20 00:33:59.847 INFO: Setting log level to 20 00:33:59.847 INFO: Log level set to 20 00:33:59.847 INFO: Log level set to 20 00:33:59.847 
INFO: Requests: 00:33:59.847 { 00:33:59.847 "jsonrpc": "2.0", 00:33:59.847 "method": "framework_start_init", 00:33:59.847 "id": 1 00:33:59.847 } 00:33:59.847 00:33:59.847 INFO: Requests: 00:33:59.847 { 00:33:59.847 "jsonrpc": "2.0", 00:33:59.847 "method": "framework_start_init", 00:33:59.847 "id": 1 00:33:59.847 } 00:33:59.847 00:33:59.847 [2024-11-25 13:32:57.430353] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:59.847 INFO: response: 00:33:59.847 { 00:33:59.847 "jsonrpc": "2.0", 00:33:59.847 "id": 1, 00:33:59.847 "result": true 00:33:59.847 } 00:33:59.847 00:33:59.847 INFO: response: 00:33:59.847 { 00:33:59.847 "jsonrpc": "2.0", 00:33:59.847 "id": 1, 00:33:59.847 "result": true 00:33:59.847 } 00:33:59.847 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.847 INFO: Setting log level to 40 00:33:59.847 INFO: Setting log level to 40 00:33:59.847 INFO: Setting log level to 40 00:33:59.847 [2024-11-25 13:32:57.440486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:59.847 13:32:57 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:33:59.847 13:32:57 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.847 13:32:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.121 Nvme0n1 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.121 [2024-11-25 13:33:00.344128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.121 13:33:00 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.121 [ 00:34:03.121 { 00:34:03.121 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:03.121 "subtype": "Discovery", 00:34:03.121 "listen_addresses": [], 00:34:03.121 "allow_any_host": true, 00:34:03.121 "hosts": [] 00:34:03.121 }, 00:34:03.121 { 00:34:03.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:03.121 "subtype": "NVMe", 00:34:03.121 "listen_addresses": [ 00:34:03.121 { 00:34:03.121 "trtype": "TCP", 00:34:03.121 "adrfam": "IPv4", 00:34:03.121 "traddr": "10.0.0.2", 00:34:03.121 "trsvcid": "4420" 00:34:03.121 } 00:34:03.121 ], 00:34:03.121 "allow_any_host": true, 00:34:03.121 "hosts": [], 00:34:03.121 "serial_number": "SPDK00000000000001", 00:34:03.121 "model_number": "SPDK bdev Controller", 00:34:03.121 "max_namespaces": 1, 00:34:03.121 "min_cntlid": 1, 00:34:03.121 "max_cntlid": 65519, 00:34:03.121 "namespaces": [ 00:34:03.121 { 00:34:03.121 "nsid": 1, 00:34:03.121 "bdev_name": "Nvme0n1", 00:34:03.121 "name": "Nvme0n1", 00:34:03.121 "nguid": "0EBBA7FB71B24A1FBBC89B06F7321E4C", 00:34:03.121 "uuid": "0ebba7fb-71b2-4a1f-bbc8-9b06f7321e4c" 00:34:03.121 } 00:34:03.121 ] 00:34:03.121 } 00:34:03.121 ] 00:34:03.121 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:03.121 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:03.378 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:03.378 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:34:03.378 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:03.378 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.378 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:03.378 13:33:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:03.378 rmmod nvme_tcp 00:34:03.378 rmmod nvme_fabrics 00:34:03.378 rmmod nvme_keyring 00:34:03.378 13:33:00 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3344491 ']' 00:34:03.378 13:33:00 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3344491 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3344491 ']' 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3344491 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.378 13:33:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3344491 00:34:03.378 13:33:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.378 13:33:01 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.378 13:33:01 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3344491' 00:34:03.378 killing process with pid 3344491 00:34:03.378 13:33:01 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3344491 00:34:03.378 13:33:01 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3344491 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:05.272 13:33:02 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.272 13:33:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.272 13:33:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:05.272 13:33:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.172 13:33:04 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:07.172 00:34:07.172 real 0m18.217s 00:34:07.172 user 0m26.838s 00:34:07.172 sys 0m3.179s 00:34:07.172 13:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:07.172 13:33:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:07.172 ************************************ 00:34:07.172 END TEST nvmf_identify_passthru 00:34:07.172 ************************************ 00:34:07.172 13:33:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:07.172 13:33:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:07.172 13:33:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.172 13:33:04 -- common/autotest_common.sh@10 -- # set +x 00:34:07.172 ************************************ 00:34:07.172 START TEST nvmf_dif 00:34:07.172 ************************************ 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:07.172 * Looking for test storage... 
00:34:07.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.172 --rc genhtml_branch_coverage=1 00:34:07.172 --rc genhtml_function_coverage=1 00:34:07.172 --rc genhtml_legend=1 00:34:07.172 --rc geninfo_all_blocks=1 00:34:07.172 --rc geninfo_unexecuted_blocks=1 00:34:07.172 00:34:07.172 ' 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.172 --rc genhtml_branch_coverage=1 00:34:07.172 --rc genhtml_function_coverage=1 00:34:07.172 --rc genhtml_legend=1 00:34:07.172 --rc geninfo_all_blocks=1 00:34:07.172 --rc geninfo_unexecuted_blocks=1 00:34:07.172 00:34:07.172 ' 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:34:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.172 --rc genhtml_branch_coverage=1 00:34:07.172 --rc genhtml_function_coverage=1 00:34:07.172 --rc genhtml_legend=1 00:34:07.172 --rc geninfo_all_blocks=1 00:34:07.172 --rc geninfo_unexecuted_blocks=1 00:34:07.172 00:34:07.172 ' 00:34:07.172 13:33:04 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.172 --rc genhtml_branch_coverage=1 00:34:07.172 --rc genhtml_function_coverage=1 00:34:07.172 --rc genhtml_legend=1 00:34:07.172 --rc geninfo_all_blocks=1 00:34:07.172 --rc geninfo_unexecuted_blocks=1 00:34:07.172 00:34:07.172 ' 00:34:07.172 13:33:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:07.172 13:33:04 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.172 13:33:04 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.172 13:33:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.173 13:33:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.173 13:33:04 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.173 13:33:04 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.173 13:33:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:07.173 13:33:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:07.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.173 13:33:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:07.173 13:33:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:07.173 13:33:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:07.173 13:33:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:07.173 13:33:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.173 13:33:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:07.173 13:33:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.173 13:33:04 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.173 13:33:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.704 13:33:06 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.704 13:33:06 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.704 13:33:06 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.704 13:33:06 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.704 13:33:06 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:09.705 13:33:06 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:09.705 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:09.705 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.705 13:33:06 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:09.705 Found net devices under 0000:09:00.0: cvl_0_0 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:09.705 Found net devices under 0000:09:00.1: cvl_0_1 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.705 
13:33:06 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.705 13:33:06 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:09.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms
00:34:09.705
00:34:09.705 --- 10.0.0.2 ping statistics ---
00:34:09.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:09.705 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms
00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:09.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:09.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms
00:34:09.705
00:34:09.705 --- 10.0.0.1 ping statistics ---
00:34:09.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:09.705 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:34:09.705 13:33:07 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:34:10.638 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:34:10.638 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:34:10.638 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:34:10.638 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:34:10.638 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:34:10.638 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:34:10.638 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:34:10.638 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:34:10.638 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:34:10.638 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:34:10.638 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:34:10.638 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:34:10.638 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:34:10.638 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:34:10.638 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:34:10.638 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:34:10.638 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:10.925 13:33:08 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:34:10.925 13:33:08 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3348259
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:34:10.925 13:33:08 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3348259
00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3348259 ']'
00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:10.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.925 13:33:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.925 [2024-11-25 13:33:08.386616] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:34:10.925 [2024-11-25 13:33:08.386737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.925 [2024-11-25 13:33:08.466680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.925 [2024-11-25 13:33:08.526981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.925 [2024-11-25 13:33:08.527031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.925 [2024-11-25 13:33:08.527054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.925 [2024-11-25 13:33:08.527065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.925 [2024-11-25 13:33:08.527074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:10.925 [2024-11-25 13:33:08.527703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:11.207 13:33:08 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:11.207 13:33:08 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.207 13:33:08 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:11.207 13:33:08 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:11.207 [2024-11-25 13:33:08.678631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.207 13:33:08 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.207 13:33:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:11.207 ************************************ 00:34:11.207 START TEST fio_dif_1_default 00:34:11.207 ************************************ 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:11.207 bdev_null0 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:11.207 [2024-11-25 13:33:08.734883] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:11.207 { 00:34:11.207 "params": { 00:34:11.207 "name": "Nvme$subsystem", 00:34:11.207 "trtype": "$TEST_TRANSPORT", 00:34:11.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.207 "adrfam": "ipv4", 00:34:11.207 "trsvcid": "$NVMF_PORT", 00:34:11.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.207 "hdgst": ${hdgst:-false}, 00:34:11.207 "ddgst": 
${ddgst:-false} 00:34:11.207 }, 00:34:11.207 "method": "bdev_nvme_attach_controller" 00:34:11.207 } 00:34:11.207 EOF 00:34:11.207 )") 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:11.207 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:11.208 "params": { 00:34:11.208 "name": "Nvme0", 00:34:11.208 "trtype": "tcp", 00:34:11.208 "traddr": "10.0.0.2", 00:34:11.208 "adrfam": "ipv4", 00:34:11.208 "trsvcid": "4420", 00:34:11.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:11.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:11.208 "hdgst": false, 00:34:11.208 "ddgst": false 00:34:11.208 }, 00:34:11.208 "method": "bdev_nvme_attach_controller" 00:34:11.208 }' 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:11.208 13:33:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.465 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:11.465 fio-3.35 
00:34:11.465 Starting 1 thread
00:34:23.657
00:34:23.657 filename0: (groupid=0, jobs=1): err= 0: pid=3348625: Mon Nov 25 13:33:19 2024
00:34:23.657   read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10013msec)
00:34:23.657     slat (nsec): min=3905, max=40447, avg=9383.58, stdev=2683.97
00:34:23.657     clat (usec): min=534, max=45290, avg=40342.70, stdev=5095.99
00:34:23.657      lat (usec): min=542, max=45303, avg=40352.08, stdev=5095.77
00:34:23.657     clat percentiles (usec):
00:34:23.657      |  1.00th=[  611],  5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:34:23.657      | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:34:23.657      | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:34:23.657      | 99.00th=[41157], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351],
00:34:23.657      | 99.99th=[45351]
00:34:23.657    bw (  KiB/s): min=  384, max=  448, per=99.68%, avg=395.20, stdev=21.47, samples=20
00:34:23.657    iops        : min=   96, max=  112, avg=98.80, stdev= 5.37, samples=20
00:34:23.657   lat (usec)   : 750=1.61%
00:34:23.657   lat (msec)   : 50=98.39%
00:34:23.657   cpu          : usr=90.84%, sys=8.87%, ctx=22, majf=0, minf=172
00:34:23.657   IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:34:23.657      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:23.657      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:23.657      issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:23.657      latency   : target=0, window=0, percentile=100.00%, depth=4
00:34:23.657
00:34:23.657 Run status group 0 (all jobs):
00:34:23.657    READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10013-10013msec
00:34:23.657  13:33:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:34:23.657  13:33:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:34:23.657  13:33:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:34:23.657
13:33:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.657 00:34:23.657 real 0m11.239s 00:34:23.657 user 0m10.203s 00:34:23.657 sys 0m1.168s 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.657 ************************************ 00:34:23.657 END TEST fio_dif_1_default 00:34:23.657 ************************************ 00:34:23.657 13:33:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:23.657 13:33:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:23.657 13:33:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.657 13:33:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.657 ************************************ 00:34:23.657 START TEST fio_dif_1_multi_subsystems 00:34:23.657 ************************************ 00:34:23.657 13:33:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:23.657 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 bdev_null0 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 [2024-11-25 13:33:20.025625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 bdev_null1 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:23.658 { 00:34:23.658 "params": { 00:34:23.658 "name": "Nvme$subsystem", 00:34:23.658 "trtype": "$TEST_TRANSPORT", 00:34:23.658 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:23.658 "adrfam": "ipv4", 00:34:23.658 "trsvcid": "$NVMF_PORT", 00:34:23.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.658 "hdgst": ${hdgst:-false}, 00:34:23.658 "ddgst": ${ddgst:-false} 00:34:23.658 }, 00:34:23.658 "method": "bdev_nvme_attach_controller" 00:34:23.658 } 00:34:23.658 EOF 00:34:23.658 )") 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.658 
13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:23.658 { 00:34:23.658 "params": { 00:34:23.658 "name": "Nvme$subsystem", 00:34:23.658 "trtype": "$TEST_TRANSPORT", 00:34:23.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.658 "adrfam": "ipv4", 00:34:23.658 "trsvcid": "$NVMF_PORT", 00:34:23.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.658 "hdgst": ${hdgst:-false}, 00:34:23.658 "ddgst": ${ddgst:-false} 00:34:23.658 }, 00:34:23.658 "method": "bdev_nvme_attach_controller" 00:34:23.658 } 00:34:23.658 EOF 00:34:23.658 )") 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:23.658 "params": { 00:34:23.658 "name": "Nvme0", 00:34:23.658 "trtype": "tcp", 00:34:23.658 "traddr": "10.0.0.2", 00:34:23.658 "adrfam": "ipv4", 00:34:23.658 "trsvcid": "4420", 00:34:23.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:23.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.658 "hdgst": false, 00:34:23.658 "ddgst": false 00:34:23.658 }, 00:34:23.658 "method": "bdev_nvme_attach_controller" 00:34:23.658 },{ 00:34:23.658 "params": { 00:34:23.658 "name": "Nvme1", 00:34:23.658 "trtype": "tcp", 00:34:23.658 "traddr": "10.0.0.2", 00:34:23.658 "adrfam": "ipv4", 00:34:23.658 "trsvcid": "4420", 00:34:23.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.658 "hdgst": false, 00:34:23.658 "ddgst": false 00:34:23.658 }, 00:34:23.658 "method": "bdev_nvme_attach_controller" 00:34:23.658 }' 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:23.658 13:33:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.658 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.658 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.658 fio-3.35 00:34:23.658 Starting 2 threads 00:34:33.621 00:34:33.621 filename0: (groupid=0, jobs=1): err= 0: pid=3350134: Mon Nov 25 13:33:31 2024 00:34:33.621 read: IOPS=198, BW=793KiB/s (812kB/s)(7952KiB/10026msec) 00:34:33.621 slat (nsec): min=6906, max=72323, avg=8895.90, stdev=3525.39 00:34:33.622 clat (usec): min=535, max=43835, avg=20145.19, stdev=20336.88 00:34:33.622 lat (usec): min=542, max=43875, avg=20154.09, stdev=20336.70 00:34:33.622 clat percentiles (usec): 00:34:33.622 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 611], 20.00th=[ 635], 00:34:33.622 | 30.00th=[ 660], 40.00th=[ 685], 50.00th=[ 742], 60.00th=[41157], 00:34:33.622 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:33.622 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:34:33.622 | 99.99th=[43779] 00:34:33.622 bw ( KiB/s): min= 704, max= 896, per=49.13%, avg=793.60, stdev=52.53, samples=20 00:34:33.622 iops : min= 176, max= 224, avg=198.40, stdev=13.13, samples=20 00:34:33.622 lat (usec) : 750=50.60%, 1000=1.51% 00:34:33.622 lat (msec) : 50=47.89% 00:34:33.622 cpu : usr=94.72%, sys=4.97%, ctx=14, majf=0, minf=218 00:34:33.622 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:33.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.622 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.622 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:33.622 filename1: (groupid=0, jobs=1): err= 0: pid=3350135: Mon Nov 25 13:33:31 2024 00:34:33.622 read: IOPS=205, BW=821KiB/s (841kB/s)(8240KiB/10032msec) 00:34:33.622 slat (nsec): min=6934, max=40316, avg=8796.61, stdev=2928.13 00:34:33.622 clat (usec): min=522, max=42436, avg=19452.02, stdev=20330.05 00:34:33.622 lat (usec): min=529, max=42448, avg=19460.81, stdev=20329.77 00:34:33.622 clat percentiles (usec): 00:34:33.622 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 586], 00:34:33.622 | 30.00th=[ 603], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[41157], 00:34:33.622 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:33.622 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:33.622 | 99.99th=[42206] 00:34:33.622 bw ( KiB/s): min= 704, max= 960, per=50.93%, avg=822.40, stdev=72.00, samples=20 00:34:33.622 iops : min= 176, max= 240, avg=205.60, stdev=18.00, samples=20 00:34:33.622 lat (usec) : 750=53.01%, 1000=0.58% 00:34:33.622 lat (msec) : 4=0.19%, 50=46.21% 00:34:33.622 cpu : usr=94.97%, sys=4.69%, ctx=15, majf=0, minf=50 00:34:33.622 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.622 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.622 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:33.622 00:34:33.622 Run status group 0 (all jobs): 00:34:33.622 READ: bw=1614KiB/s (1653kB/s), 793KiB/s-821KiB/s (812kB/s-841kB/s), io=15.8MiB (16.6MB), run=10026-10032msec 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 
-- # destroy_subsystems 0 1 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 13:33:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.188 00:34:34.188 real 0m11.636s 00:34:34.188 user 0m20.609s 00:34:34.188 sys 0m1.251s 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 ************************************ 00:34:34.188 END TEST fio_dif_1_multi_subsystems 00:34:34.188 ************************************ 00:34:34.188 13:33:31 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:34.188 13:33:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:34.188 13:33:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 ************************************ 00:34:34.188 START TEST fio_dif_rand_params 00:34:34.188 ************************************ 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:34.188 13:33:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 bdev_null0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:34.188 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:34.189 [2024-11-25 13:33:31.715941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:34.189 { 00:34:34.189 "params": { 00:34:34.189 "name": "Nvme$subsystem", 00:34:34.189 "trtype": "$TEST_TRANSPORT", 00:34:34.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:34.189 "adrfam": "ipv4", 00:34:34.189 "trsvcid": "$NVMF_PORT", 00:34:34.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:34.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:34.189 "hdgst": ${hdgst:-false}, 00:34:34.189 "ddgst": ${ddgst:-false} 00:34:34.189 }, 
00:34:34.189 "method": "bdev_nvme_attach_controller" 00:34:34.189 } 00:34:34.189 EOF 00:34:34.189 )") 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 
00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:34.189 "params": { 00:34:34.189 "name": "Nvme0", 00:34:34.189 "trtype": "tcp", 00:34:34.189 "traddr": "10.0.0.2", 00:34:34.189 "adrfam": "ipv4", 00:34:34.189 "trsvcid": "4420", 00:34:34.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:34.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:34.189 "hdgst": false, 00:34:34.189 "ddgst": false 00:34:34.189 }, 00:34:34.189 "method": "bdev_nvme_attach_controller" 00:34:34.189 }' 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:34.189 13:33:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:34.447 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:34.447 ... 00:34:34.447 fio-3.35 00:34:34.447 Starting 3 threads 00:34:41.002 00:34:41.002 filename0: (groupid=0, jobs=1): err= 0: pid=3351536: Mon Nov 25 13:33:37 2024 00:34:41.002 read: IOPS=233, BW=29.2MiB/s (30.6MB/s)(146MiB/5006msec) 00:34:41.002 slat (nsec): min=6806, max=89516, avg=15569.71, stdev=5479.28 00:34:41.002 clat (usec): min=4065, max=53333, avg=12826.33, stdev=5081.24 00:34:41.002 lat (usec): min=4079, max=53346, avg=12841.90, stdev=5080.97 00:34:41.002 clat percentiles (usec): 00:34:41.002 | 1.00th=[ 5080], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10945], 00:34:41.002 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:34:41.002 | 70.00th=[13304], 80.00th=[14091], 90.00th=[15008], 95.00th=[15926], 00:34:41.002 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:34:41.002 | 99.99th=[53216] 00:34:41.002 bw ( KiB/s): min=21504, max=35584, per=34.22%, avg=29849.60, stdev=3969.97, samples=10 00:34:41.002 iops : min= 168, max= 278, avg=233.20, stdev=31.02, samples=10 00:34:41.002 lat (msec) : 10=7.36%, 20=91.10%, 50=0.60%, 100=0.94% 00:34:41.002 cpu : usr=92.67%, sys=6.81%, ctx=10, majf=0, minf=162 00:34:41.002 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.002 issued rwts: total=1169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.002 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:41.002 filename0: (groupid=0, jobs=1): err= 0: pid=3351537: Mon Nov 25 13:33:37 2024 00:34:41.002 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(138MiB/5044msec) 00:34:41.002 slat (nsec): min=5257, max=42457, avg=14189.57, 
stdev=3949.87 00:34:41.002 clat (usec): min=4998, max=54122, avg=13686.20, stdev=4852.65 00:34:41.002 lat (usec): min=5005, max=54135, avg=13700.39, stdev=4852.55 00:34:41.002 clat percentiles (usec): 00:34:41.002 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11207], 00:34:41.002 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13435], 60.00th=[14222], 00:34:41.002 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16319], 95.00th=[16909], 00:34:41.002 | 99.00th=[46924], 99.50th=[51643], 99.90th=[53740], 99.95th=[54264], 00:34:41.002 | 99.99th=[54264] 00:34:41.002 bw ( KiB/s): min=24320, max=33024, per=32.26%, avg=28134.40, stdev=2922.46, samples=10 00:34:41.002 iops : min= 190, max= 258, avg=219.80, stdev=22.83, samples=10 00:34:41.002 lat (msec) : 10=6.99%, 20=91.73%, 50=0.36%, 100=0.91% 00:34:41.002 cpu : usr=93.12%, sys=6.35%, ctx=17, majf=0, minf=132 00:34:41.002 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.002 issued rwts: total=1101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.002 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:41.002 filename0: (groupid=0, jobs=1): err= 0: pid=3351538: Mon Nov 25 13:33:37 2024 00:34:41.002 read: IOPS=233, BW=29.1MiB/s (30.6MB/s)(146MiB/5005msec) 00:34:41.002 slat (nsec): min=4903, max=41058, avg=14336.25, stdev=3829.64 00:34:41.002 clat (usec): min=4618, max=53809, avg=12847.23, stdev=4628.26 00:34:41.002 lat (usec): min=4630, max=53837, avg=12861.56, stdev=4628.32 00:34:41.002 clat percentiles (usec): 00:34:41.002 | 1.00th=[ 5407], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:34:41.002 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:34:41.002 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15008], 95.00th=[15664], 00:34:41.002 | 99.00th=[47449], 99.50th=[52167], 
99.90th=[53216], 99.95th=[53740], 00:34:41.002 | 99.99th=[53740] 00:34:41.002 bw ( KiB/s): min=26880, max=33280, per=34.16%, avg=29798.40, stdev=1870.34, samples=10 00:34:41.002 iops : min= 210, max= 260, avg=232.80, stdev=14.61, samples=10 00:34:41.002 lat (msec) : 10=5.91%, 20=92.80%, 50=0.77%, 100=0.51% 00:34:41.002 cpu : usr=92.71%, sys=6.75%, ctx=19, majf=0, minf=106 00:34:41.002 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.002 issued rwts: total=1167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.002 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:41.002 00:34:41.002 Run status group 0 (all jobs): 00:34:41.002 READ: bw=85.2MiB/s (89.3MB/s), 27.3MiB/s-29.2MiB/s (28.6MB/s-30.6MB/s), io=430MiB (450MB), run=5005-5044msec 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.002 13:33:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:41.002 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 bdev_null0 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 [2024-11-25 13:33:38.012668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 bdev_null1 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:41.003 bdev_null2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.003 { 00:34:41.003 "params": { 00:34:41.003 "name": "Nvme$subsystem", 00:34:41.003 "trtype": "$TEST_TRANSPORT", 00:34:41.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.003 "adrfam": "ipv4", 00:34:41.003 "trsvcid": "$NVMF_PORT", 00:34:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.003 "hdgst": ${hdgst:-false}, 00:34:41.003 "ddgst": ${ddgst:-false} 00:34:41.003 }, 00:34:41.003 "method": "bdev_nvme_attach_controller" 00:34:41.003 } 00:34:41.003 EOF 00:34:41.003 )") 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.003 13:33:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.003 { 00:34:41.003 "params": { 00:34:41.003 "name": "Nvme$subsystem", 00:34:41.003 "trtype": "$TEST_TRANSPORT", 00:34:41.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.003 "adrfam": "ipv4", 00:34:41.003 "trsvcid": "$NVMF_PORT", 00:34:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.003 "hdgst": ${hdgst:-false}, 00:34:41.003 "ddgst": ${ddgst:-false} 00:34:41.003 }, 00:34:41.003 "method": "bdev_nvme_attach_controller" 00:34:41.003 } 00:34:41.003 EOF 00:34:41.003 )") 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.003 13:33:38 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:41.003 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:41.003 { 00:34:41.003 "params": { 00:34:41.003 "name": "Nvme$subsystem", 00:34:41.003 "trtype": "$TEST_TRANSPORT", 00:34:41.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.003 "adrfam": "ipv4", 00:34:41.003 "trsvcid": "$NVMF_PORT", 00:34:41.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.003 "hdgst": ${hdgst:-false}, 00:34:41.003 "ddgst": ${ddgst:-false} 00:34:41.003 }, 00:34:41.003 "method": "bdev_nvme_attach_controller" 00:34:41.003 } 00:34:41.003 EOF 00:34:41.004 )") 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:41.004 "params": { 00:34:41.004 "name": "Nvme0", 00:34:41.004 "trtype": "tcp", 00:34:41.004 "traddr": "10.0.0.2", 00:34:41.004 "adrfam": "ipv4", 00:34:41.004 "trsvcid": "4420", 00:34:41.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.004 "hdgst": false, 00:34:41.004 "ddgst": false 00:34:41.004 }, 00:34:41.004 "method": "bdev_nvme_attach_controller" 00:34:41.004 },{ 00:34:41.004 "params": { 00:34:41.004 "name": "Nvme1", 00:34:41.004 "trtype": "tcp", 00:34:41.004 "traddr": "10.0.0.2", 00:34:41.004 "adrfam": "ipv4", 00:34:41.004 "trsvcid": "4420", 00:34:41.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.004 "hdgst": false, 00:34:41.004 "ddgst": false 00:34:41.004 }, 00:34:41.004 "method": "bdev_nvme_attach_controller" 00:34:41.004 },{ 00:34:41.004 "params": { 00:34:41.004 "name": "Nvme2", 00:34:41.004 "trtype": "tcp", 00:34:41.004 "traddr": "10.0.0.2", 00:34:41.004 "adrfam": "ipv4", 00:34:41.004 "trsvcid": "4420", 00:34:41.004 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:41.004 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:41.004 "hdgst": false, 00:34:41.004 "ddgst": false 00:34:41.004 }, 00:34:41.004 "method": "bdev_nvme_attach_controller" 00:34:41.004 }' 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.004 13:33:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.004 13:33:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.004 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:41.004 ... 00:34:41.004 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:41.004 ... 00:34:41.004 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:41.004 ... 
00:34:41.004 fio-3.35 00:34:41.004 Starting 24 threads 00:34:53.204 00:34:53.204 filename0: (groupid=0, jobs=1): err= 0: pid=3352395: Mon Nov 25 13:33:49 2024 00:34:53.204 read: IOPS=76, BW=305KiB/s (313kB/s)(3072KiB/10064msec) 00:34:53.204 slat (usec): min=8, max=110, avg=22.37, stdev=20.57 00:34:53.204 clat (msec): min=106, max=319, avg=208.49, stdev=38.89 00:34:53.204 lat (msec): min=106, max=319, avg=208.52, stdev=38.90 00:34:53.204 clat percentiles (msec): 00:34:53.204 | 1.00th=[ 107], 5.00th=[ 133], 10.00th=[ 167], 20.00th=[ 188], 00:34:53.204 | 30.00th=[ 192], 40.00th=[ 201], 50.00th=[ 207], 60.00th=[ 213], 00:34:53.204 | 70.00th=[ 218], 80.00th=[ 234], 90.00th=[ 268], 95.00th=[ 279], 00:34:53.204 | 99.00th=[ 309], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:34:53.204 | 99.99th=[ 321] 00:34:53.204 bw ( KiB/s): min= 256, max= 384, per=4.89%, avg=300.80, stdev=54.59, samples=20 00:34:53.204 iops : min= 64, max= 96, avg=75.20, stdev=13.65, samples=20 00:34:53.204 lat (msec) : 250=87.24%, 500=12.76% 00:34:53.204 cpu : usr=98.01%, sys=1.45%, ctx=94, majf=0, minf=9 00:34:53.204 IO depths : 1=0.9%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:34:53.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.204 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.204 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.204 filename0: (groupid=0, jobs=1): err= 0: pid=3352396: Mon Nov 25 13:33:49 2024 00:34:53.204 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10065msec) 00:34:53.204 slat (usec): min=10, max=107, avg=63.01, stdev=19.66 00:34:53.204 clat (msec): min=106, max=463, avg=270.92, stdev=59.70 00:34:53.204 lat (msec): min=106, max=463, avg=270.98, stdev=59.71 00:34:53.204 clat percentiles (msec): 00:34:53.204 | 1.00th=[ 107], 5.00th=[ 125], 10.00th=[ 188], 20.00th=[ 239], 00:34:53.204 | 30.00th=[ 268], 
40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 300], 00:34:53.204 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 330], 95.00th=[ 334], 00:34:53.204 | 99.00th=[ 426], 99.50th=[ 447], 99.90th=[ 464], 99.95th=[ 464], 00:34:53.204 | 99.99th=[ 464] 00:34:53.204 bw ( KiB/s): min= 128, max= 384, per=3.75%, avg=230.40, stdev=76.36, samples=20 00:34:53.204 iops : min= 32, max= 96, avg=57.60, stdev=19.09, samples=20 00:34:53.204 lat (msec) : 250=23.65%, 500=76.35% 00:34:53.204 cpu : usr=98.33%, sys=1.20%, ctx=26, majf=0, minf=9 00:34:53.204 IO depths : 1=4.1%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:34:53.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.204 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.204 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.204 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.204 filename0: (groupid=0, jobs=1): err= 0: pid=3352397: Mon Nov 25 13:33:49 2024 00:34:53.204 read: IOPS=77, BW=308KiB/s (316kB/s)(3104KiB/10065msec) 00:34:53.204 slat (usec): min=7, max=100, avg=18.73, stdev=16.81 00:34:53.204 clat (msec): min=107, max=321, avg=206.93, stdev=35.16 00:34:53.204 lat (msec): min=107, max=321, avg=206.95, stdev=35.16 00:34:53.204 clat percentiles (msec): 00:34:53.204 | 1.00th=[ 108], 5.00th=[ 165], 10.00th=[ 182], 20.00th=[ 186], 00:34:53.204 | 30.00th=[ 190], 40.00th=[ 201], 50.00th=[ 205], 60.00th=[ 207], 00:34:53.204 | 70.00th=[ 213], 80.00th=[ 230], 90.00th=[ 255], 95.00th=[ 279], 00:34:53.204 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:34:53.205 | 99.99th=[ 321] 00:34:53.205 bw ( KiB/s): min= 256, max= 384, per=4.94%, avg=304.00, stdev=53.19, samples=20 00:34:53.205 iops : min= 64, max= 96, avg=76.00, stdev=13.30, samples=20 00:34:53.205 lat (msec) : 250=88.14%, 500=11.86% 00:34:53.205 cpu : usr=98.49%, sys=1.10%, ctx=16, majf=0, minf=9 00:34:53.205 IO depths : 1=2.3%, 2=6.1%, 4=17.3%, 8=64.0%, 
16=10.3%, 32=0.0%, >=64=0.0% 00:34:53.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 issued rwts: total=776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.205 filename0: (groupid=0, jobs=1): err= 0: pid=3352398: Mon Nov 25 13:33:49 2024 00:34:53.205 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10075msec) 00:34:53.205 slat (usec): min=11, max=106, avg=26.55, stdev=15.19 00:34:53.205 clat (msec): min=85, max=418, avg=287.63, stdev=54.42 00:34:53.205 lat (msec): min=85, max=418, avg=287.65, stdev=54.42 00:34:53.205 clat percentiles (msec): 00:34:53.205 | 1.00th=[ 86], 5.00th=[ 190], 10.00th=[ 220], 20.00th=[ 271], 00:34:53.205 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 305], 00:34:53.205 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 351], 00:34:53.205 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 418], 99.95th=[ 418], 00:34:53.205 | 99.99th=[ 418] 00:34:53.205 bw ( KiB/s): min= 128, max= 256, per=3.54%, avg=217.60, stdev=58.59, samples=20 00:34:53.205 iops : min= 32, max= 64, avg=54.40, stdev=14.65, samples=20 00:34:53.205 lat (msec) : 100=2.86%, 250=13.21%, 500=83.93% 00:34:53.205 cpu : usr=98.43%, sys=1.10%, ctx=33, majf=0, minf=9 00:34:53.205 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:34:53.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.205 filename0: (groupid=0, jobs=1): err= 0: pid=3352399: Mon Nov 25 13:33:49 2024 00:34:53.205 read: IOPS=55, BW=223KiB/s (228kB/s)(2240KiB/10043msec) 00:34:53.205 slat (nsec): min=8089, max=87902, avg=26139.51, 
stdev=12487.28 00:34:53.205 clat (msec): min=188, max=397, avg=286.72, stdev=38.58 00:34:53.205 lat (msec): min=188, max=397, avg=286.75, stdev=38.58 00:34:53.205 clat percentiles (msec): 00:34:53.205 | 1.00th=[ 188], 5.00th=[ 215], 10.00th=[ 218], 20.00th=[ 262], 00:34:53.205 | 30.00th=[ 279], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 300], 00:34:53.205 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 334], 95.00th=[ 338], 00:34:53.205 | 99.00th=[ 342], 99.50th=[ 380], 99.90th=[ 397], 99.95th=[ 397], 00:34:53.205 | 99.99th=[ 397] 00:34:53.205 bw ( KiB/s): min= 128, max= 368, per=3.54%, avg=217.60, stdev=67.56, samples=20 00:34:53.205 iops : min= 32, max= 92, avg=54.40, stdev=16.89, samples=20 00:34:53.205 lat (msec) : 250=15.00%, 500=85.00% 00:34:53.205 cpu : usr=98.18%, sys=1.22%, ctx=85, majf=0, minf=9 00:34:53.205 IO depths : 1=2.3%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:34:53.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.205 filename0: (groupid=0, jobs=1): err= 0: pid=3352400: Mon Nov 25 13:33:49 2024 00:34:53.205 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10080msec) 00:34:53.205 slat (usec): min=5, max=119, avg=73.44, stdev=16.89 00:34:53.205 clat (msec): min=140, max=438, avg=286.70, stdev=47.49 00:34:53.205 lat (msec): min=140, max=438, avg=286.77, stdev=47.49 00:34:53.205 clat percentiles (msec): 00:34:53.205 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 211], 20.00th=[ 268], 00:34:53.205 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 296], 00:34:53.205 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 338], 00:34:53.205 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 439], 99.95th=[ 439], 00:34:53.205 | 99.99th=[ 439] 00:34:53.205 bw ( KiB/s): min= 128, max= 
256, per=3.54%, avg=217.60, stdev=56.96, samples=20 00:34:53.205 iops : min= 32, max= 64, avg=54.40, stdev=14.24, samples=20 00:34:53.205 lat (msec) : 250=16.07%, 500=83.93% 00:34:53.205 cpu : usr=98.00%, sys=1.46%, ctx=79, majf=0, minf=9 00:34:53.205 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:34:53.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.205 filename0: (groupid=0, jobs=1): err= 0: pid=3352401: Mon Nov 25 13:33:49 2024 00:34:53.205 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10085msec) 00:34:53.205 slat (usec): min=10, max=139, avg=68.68, stdev=16.17 00:34:53.205 clat (msec): min=110, max=439, avg=278.90, stdev=57.72 00:34:53.205 lat (msec): min=110, max=439, avg=278.97, stdev=57.73 00:34:53.205 clat percentiles (msec): 00:34:53.205 | 1.00th=[ 111], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 222], 00:34:53.205 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 296], 00:34:53.205 | 70.00th=[ 300], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 338], 00:34:53.205 | 99.00th=[ 426], 99.50th=[ 435], 99.90th=[ 439], 99.95th=[ 439], 00:34:53.205 | 99.99th=[ 439] 00:34:53.205 bw ( KiB/s): min= 128, max= 384, per=3.63%, avg=224.00, stdev=67.68, samples=20 00:34:53.205 iops : min= 32, max= 96, avg=56.00, stdev=16.92, samples=20 00:34:53.205 lat (msec) : 250=21.18%, 500=78.82% 00:34:53.205 cpu : usr=97.97%, sys=1.46%, ctx=25, majf=0, minf=9 00:34:53.205 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:53.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:34:53.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.205 filename0: (groupid=0, jobs=1): err= 0: pid=3352402: Mon Nov 25 13:33:49 2024 00:34:53.205 read: IOPS=55, BW=223KiB/s (228kB/s)(2240KiB/10042msec) 00:34:53.205 slat (nsec): min=8626, max=87075, avg=32568.75, stdev=13844.62 00:34:53.205 clat (msec): min=134, max=438, avg=286.64, stdev=49.86 00:34:53.205 lat (msec): min=134, max=438, avg=286.67, stdev=49.85 00:34:53.205 clat percentiles (msec): 00:34:53.205 | 1.00th=[ 157], 5.00th=[ 190], 10.00th=[ 213], 20.00th=[ 255], 00:34:53.205 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 300], 00:34:53.205 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 342], 00:34:53.205 | 99.00th=[ 418], 99.50th=[ 422], 99.90th=[ 439], 99.95th=[ 439], 00:34:53.205 | 99.99th=[ 439] 00:34:53.205 bw ( KiB/s): min= 128, max= 384, per=3.54%, avg=217.60, stdev=70.49, samples=20 00:34:53.205 iops : min= 32, max= 96, avg=54.40, stdev=17.62, samples=20 00:34:53.205 lat (msec) : 250=18.57%, 500=81.43% 00:34:53.205 cpu : usr=98.45%, sys=1.15%, ctx=18, majf=0, minf=9 00:34:53.205 IO depths : 1=3.2%, 2=9.3%, 4=24.5%, 8=53.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:34:53.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.205 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.205 filename1: (groupid=0, jobs=1): err= 0: pid=3352403: Mon Nov 25 13:33:49 2024 00:34:53.205 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10065msec) 00:34:53.205 slat (usec): min=6, max=114, avg=66.02, stdev=18.51 00:34:53.205 clat (msec): min=107, max=423, avg=277.72, stdev=57.71 00:34:53.205 lat (msec): min=107, max=423, avg=277.78, stdev=57.72 00:34:53.205 clat percentiles (msec): 00:34:53.205 | 1.00th=[ 108], 5.00th=[ 125], 10.00th=[ 190], 20.00th=[ 239], 00:34:53.205 | 
30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 300], 00:34:53.205 | 70.00th=[ 305], 80.00th=[ 317], 90.00th=[ 338], 95.00th=[ 347], 00:34:53.205 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 422], 99.95th=[ 422], 00:34:53.205 | 99.99th=[ 422] 00:34:53.205 bw ( KiB/s): min= 128, max= 384, per=3.63%, avg=224.00, stdev=66.28, samples=20 00:34:53.205 iops : min= 32, max= 96, avg=56.00, stdev=16.57, samples=20 00:34:53.205 lat (msec) : 250=21.88%, 500=78.12% 00:34:53.205 cpu : usr=98.18%, sys=1.31%, ctx=23, majf=0, minf=9 00:34:53.206 IO depths : 1=1.7%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:34:53.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.206 filename1: (groupid=0, jobs=1): err= 0: pid=3352404: Mon Nov 25 13:33:49 2024 00:34:53.206 read: IOPS=57, BW=229KiB/s (234kB/s)(2304KiB/10069msec) 00:34:53.206 slat (usec): min=8, max=111, avg=66.44, stdev=23.38 00:34:53.206 clat (msec): min=123, max=463, avg=278.46, stdev=53.70 00:34:53.206 lat (msec): min=123, max=463, avg=278.53, stdev=53.71 00:34:53.206 clat percentiles (msec): 00:34:53.206 | 1.00th=[ 157], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 236], 00:34:53.206 | 30.00th=[ 268], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 300], 00:34:53.206 | 70.00th=[ 309], 80.00th=[ 321], 90.00th=[ 334], 95.00th=[ 342], 00:34:53.206 | 99.00th=[ 414], 99.50th=[ 460], 99.90th=[ 464], 99.95th=[ 464], 00:34:53.206 | 99.99th=[ 464] 00:34:53.206 bw ( KiB/s): min= 128, max= 368, per=3.63%, avg=224.00, stdev=69.26, samples=20 00:34:53.206 iops : min= 32, max= 92, avg=56.00, stdev=17.31, samples=20 00:34:53.206 lat (msec) : 250=24.65%, 500=75.35% 00:34:53.206 cpu : usr=98.42%, sys=1.04%, ctx=48, majf=0, minf=9 00:34:53.206 IO depths : 1=3.8%, 2=10.1%, 
4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:34:53.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.206 filename1: (groupid=0, jobs=1): err= 0: pid=3352405: Mon Nov 25 13:33:49 2024 00:34:53.206 read: IOPS=77, BW=312KiB/s (319kB/s)(3136KiB/10064msec) 00:34:53.206 slat (nsec): min=7779, max=72409, avg=18702.38, stdev=12848.14 00:34:53.206 clat (msec): min=118, max=322, avg=204.26, stdev=29.67 00:34:53.206 lat (msec): min=118, max=322, avg=204.28, stdev=29.68 00:34:53.206 clat percentiles (msec): 00:34:53.206 | 1.00th=[ 120], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 188], 00:34:53.206 | 30.00th=[ 192], 40.00th=[ 199], 50.00th=[ 205], 60.00th=[ 207], 00:34:53.206 | 70.00th=[ 213], 80.00th=[ 222], 90.00th=[ 239], 95.00th=[ 255], 00:34:53.206 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 321], 99.95th=[ 321], 00:34:53.206 | 99.99th=[ 321] 00:34:53.206 bw ( KiB/s): min= 256, max= 384, per=5.00%, avg=307.20, stdev=56.53, samples=20 00:34:53.206 iops : min= 64, max= 96, avg=76.80, stdev=14.13, samples=20 00:34:53.206 lat (msec) : 250=93.62%, 500=6.38% 00:34:53.206 cpu : usr=98.26%, sys=1.34%, ctx=18, majf=0, minf=9 00:34:53.206 IO depths : 1=1.7%, 2=7.9%, 4=25.0%, 8=54.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:34:53.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.206 filename1: (groupid=0, jobs=1): err= 0: pid=3352406: Mon Nov 25 13:33:49 2024 00:34:53.206 read: IOPS=78, BW=314KiB/s (321kB/s)(3168KiB/10093msec) 00:34:53.206 slat (nsec): min=7917, 
max=92080, avg=21294.03, stdev=18989.55 00:34:53.206 clat (msec): min=106, max=294, avg=203.16, stdev=33.76 00:34:53.206 lat (msec): min=106, max=294, avg=203.18, stdev=33.76 00:34:53.206 clat percentiles (msec): 00:34:53.206 | 1.00th=[ 107], 5.00th=[ 133], 10.00th=[ 180], 20.00th=[ 186], 00:34:53.206 | 30.00th=[ 190], 40.00th=[ 201], 50.00th=[ 207], 60.00th=[ 209], 00:34:53.206 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 239], 95.00th=[ 275], 00:34:53.206 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 296], 99.95th=[ 296], 00:34:53.206 | 99.99th=[ 296] 00:34:53.206 bw ( KiB/s): min= 240, max= 384, per=5.05%, avg=310.40, stdev=43.25, samples=20 00:34:53.206 iops : min= 60, max= 96, avg=77.60, stdev=10.81, samples=20 00:34:53.206 lat (msec) : 250=92.17%, 500=7.83% 00:34:53.206 cpu : usr=97.92%, sys=1.53%, ctx=53, majf=0, minf=9 00:34:53.206 IO depths : 1=1.1%, 2=3.5%, 4=13.3%, 8=70.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:34:53.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 complete : 0=0.0%, 4=90.7%, 8=3.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 issued rwts: total=792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.206 filename1: (groupid=0, jobs=1): err= 0: pid=3352407: Mon Nov 25 13:33:49 2024 00:34:53.206 read: IOPS=62, BW=248KiB/s (254kB/s)(2496KiB/10056msec) 00:34:53.206 slat (nsec): min=3764, max=79806, avg=21479.24, stdev=8997.25 00:34:53.206 clat (msec): min=176, max=407, avg=257.66, stdev=48.40 00:34:53.206 lat (msec): min=176, max=407, avg=257.68, stdev=48.40 00:34:53.206 clat percentiles (msec): 00:34:53.206 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 205], 00:34:53.206 | 30.00th=[ 224], 40.00th=[ 239], 50.00th=[ 275], 60.00th=[ 279], 00:34:53.206 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 326], 95.00th=[ 334], 00:34:53.206 | 99.00th=[ 338], 99.50th=[ 384], 99.90th=[ 409], 99.95th=[ 409], 00:34:53.206 | 99.99th=[ 409] 
00:34:53.206 bw ( KiB/s): min= 128, max= 384, per=3.96%, avg=243.20, stdev=69.37, samples=20 00:34:53.206 iops : min= 32, max= 96, avg=60.80, stdev=17.34, samples=20 00:34:53.206 lat (msec) : 250=44.55%, 500=55.45% 00:34:53.206 cpu : usr=98.20%, sys=1.44%, ctx=20, majf=0, minf=9 00:34:53.206 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:34:53.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.206 filename1: (groupid=0, jobs=1): err= 0: pid=3352408: Mon Nov 25 13:33:49 2024 00:34:53.206 read: IOPS=75, BW=304KiB/s (311kB/s)(3064KiB/10083msec) 00:34:53.206 slat (nsec): min=7911, max=86618, avg=17883.74, stdev=15161.51 00:34:53.206 clat (msec): min=133, max=354, avg=209.91, stdev=33.00 00:34:53.206 lat (msec): min=133, max=354, avg=209.93, stdev=33.00 00:34:53.206 clat percentiles (msec): 00:34:53.206 | 1.00th=[ 153], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:34:53.206 | 30.00th=[ 194], 40.00th=[ 203], 50.00th=[ 205], 60.00th=[ 207], 00:34:53.206 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 259], 95.00th=[ 279], 00:34:53.206 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 355], 00:34:53.206 | 99.99th=[ 355] 00:34:53.206 bw ( KiB/s): min= 240, max= 384, per=4.89%, avg=300.00, stdev=48.11, samples=20 00:34:53.206 iops : min= 60, max= 96, avg=75.00, stdev=12.03, samples=20 00:34:53.206 lat (msec) : 250=86.95%, 500=13.05% 00:34:53.206 cpu : usr=97.93%, sys=1.47%, ctx=49, majf=0, minf=9 00:34:53.206 IO depths : 1=1.6%, 2=4.4%, 4=14.6%, 8=68.3%, 16=11.1%, 32=0.0%, >=64=0.0% 00:34:53.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.206 issued 
rwts: total=766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.206 filename1: (groupid=0, jobs=1): err= 0: pid=3352409: Mon Nov 25 13:33:49 2024 00:34:53.206 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10073msec) 00:34:53.206 slat (usec): min=15, max=103, avg=25.86, stdev=11.98 00:34:53.206 clat (msec): min=85, max=425, avg=287.59, stdev=58.85 00:34:53.206 lat (msec): min=85, max=425, avg=287.61, stdev=58.84 00:34:53.206 clat percentiles (msec): 00:34:53.206 | 1.00th=[ 86], 5.00th=[ 190], 10.00th=[ 213], 20.00th=[ 271], 00:34:53.206 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 300], 00:34:53.207 | 70.00th=[ 313], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 388], 00:34:53.207 | 99.00th=[ 405], 99.50th=[ 418], 99.90th=[ 426], 99.95th=[ 426], 00:34:53.207 | 99.99th=[ 426] 00:34:53.207 bw ( KiB/s): min= 128, max= 256, per=3.54%, avg=217.60, stdev=56.96, samples=20 00:34:53.207 iops : min= 32, max= 64, avg=54.40, stdev=14.24, samples=20 00:34:53.207 lat (msec) : 100=2.86%, 250=15.71%, 500=81.43% 00:34:53.207 cpu : usr=97.92%, sys=1.48%, ctx=50, majf=0, minf=9 00:34:53.207 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:34:53.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.207 filename1: (groupid=0, jobs=1): err= 0: pid=3352410: Mon Nov 25 13:33:49 2024 00:34:53.207 read: IOPS=81, BW=324KiB/s (332kB/s)(3272KiB/10093msec) 00:34:53.207 slat (nsec): min=7929, max=98519, avg=13380.98, stdev=11405.15 00:34:53.207 clat (msec): min=107, max=317, avg=197.12, stdev=39.73 00:34:53.207 lat (msec): min=107, max=317, avg=197.13, stdev=39.73 00:34:53.207 clat percentiles (msec): 00:34:53.207 | 1.00th=[ 108], 
5.00th=[ 126], 10.00th=[ 140], 20.00th=[ 163], 00:34:53.207 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 201], 60.00th=[ 207], 00:34:53.207 | 70.00th=[ 211], 80.00th=[ 222], 90.00th=[ 241], 95.00th=[ 271], 00:34:53.207 | 99.00th=[ 288], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:34:53.207 | 99.99th=[ 317] 00:34:53.207 bw ( KiB/s): min= 256, max= 384, per=5.21%, avg=320.80, stdev=45.99, samples=20 00:34:53.207 iops : min= 64, max= 96, avg=80.20, stdev=11.50, samples=20 00:34:53.207 lat (msec) : 250=93.15%, 500=6.85% 00:34:53.207 cpu : usr=98.37%, sys=1.24%, ctx=28, majf=0, minf=9 00:34:53.207 IO depths : 1=0.5%, 2=1.3%, 4=8.3%, 8=77.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:53.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 complete : 0=0.0%, 4=89.2%, 8=5.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 issued rwts: total=818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.207 filename2: (groupid=0, jobs=1): err= 0: pid=3352411: Mon Nov 25 13:33:49 2024 00:34:53.207 read: IOPS=58, BW=234KiB/s (240kB/s)(2360KiB/10080msec) 00:34:53.207 slat (usec): min=5, max=111, avg=62.73, stdev=27.06 00:34:53.207 clat (msec): min=83, max=437, avg=272.71, stdev=55.43 00:34:53.207 lat (msec): min=83, max=437, avg=272.77, stdev=55.45 00:34:53.207 clat percentiles (msec): 00:34:53.207 | 1.00th=[ 84], 5.00th=[ 188], 10.00th=[ 203], 20.00th=[ 222], 00:34:53.207 | 30.00th=[ 268], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 288], 00:34:53.207 | 70.00th=[ 296], 80.00th=[ 313], 90.00th=[ 334], 95.00th=[ 338], 00:34:53.207 | 99.00th=[ 401], 99.50th=[ 426], 99.90th=[ 439], 99.95th=[ 439], 00:34:53.207 | 99.99th=[ 439] 00:34:53.207 bw ( KiB/s): min= 128, max= 368, per=3.73%, avg=229.60, stdev=60.82, samples=20 00:34:53.207 iops : min= 32, max= 92, avg=57.40, stdev=15.21, samples=20 00:34:53.207 lat (msec) : 100=2.37%, 250=24.41%, 500=73.22% 00:34:53.207 cpu : usr=97.61%, 
sys=1.48%, ctx=208, majf=0, minf=10 00:34:53.207 IO depths : 1=1.9%, 2=8.1%, 4=25.1%, 8=54.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:34:53.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.207 filename2: (groupid=0, jobs=1): err= 0: pid=3352412: Mon Nov 25 13:33:49 2024 00:34:53.207 read: IOPS=59, BW=237KiB/s (243kB/s)(2392KiB/10093msec) 00:34:53.207 slat (usec): min=10, max=113, avg=70.62, stdev=17.07 00:34:53.207 clat (msec): min=106, max=422, avg=268.93, stdev=63.86 00:34:53.207 lat (msec): min=106, max=422, avg=269.00, stdev=63.87 00:34:53.207 clat percentiles (msec): 00:34:53.207 | 1.00th=[ 108], 5.00th=[ 125], 10.00th=[ 188], 20.00th=[ 213], 00:34:53.207 | 30.00th=[ 243], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 292], 00:34:53.207 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 338], 95.00th=[ 351], 00:34:53.207 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 422], 99.95th=[ 422], 00:34:53.207 | 99.99th=[ 422] 00:34:53.207 bw ( KiB/s): min= 128, max= 384, per=3.78%, avg=232.80, stdev=65.96, samples=20 00:34:53.207 iops : min= 32, max= 96, avg=58.20, stdev=16.49, samples=20 00:34:53.207 lat (msec) : 250=30.10%, 500=69.90% 00:34:53.207 cpu : usr=97.92%, sys=1.44%, ctx=276, majf=0, minf=9 00:34:53.207 IO depths : 1=3.0%, 2=8.9%, 4=23.7%, 8=54.8%, 16=9.5%, 32=0.0%, >=64=0.0% 00:34:53.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.207 filename2: (groupid=0, jobs=1): err= 0: pid=3352413: Mon Nov 25 13:33:49 2024 00:34:53.207 read: IOPS=55, BW=223KiB/s 
(228kB/s)(2240KiB/10042msec) 00:34:53.207 slat (nsec): min=9096, max=60049, avg=27697.53, stdev=9297.77 00:34:53.207 clat (msec): min=170, max=438, avg=286.68, stdev=45.25 00:34:53.207 lat (msec): min=170, max=438, avg=286.70, stdev=45.25 00:34:53.207 clat percentiles (msec): 00:34:53.207 | 1.00th=[ 188], 5.00th=[ 197], 10.00th=[ 215], 20.00th=[ 255], 00:34:53.207 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 296], 60.00th=[ 300], 00:34:53.207 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 342], 00:34:53.207 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 439], 00:34:53.207 | 99.99th=[ 439] 00:34:53.207 bw ( KiB/s): min= 128, max= 384, per=3.54%, avg=217.60, stdev=70.49, samples=20 00:34:53.207 iops : min= 32, max= 96, avg=54.40, stdev=17.62, samples=20 00:34:53.207 lat (msec) : 250=17.14%, 500=82.86% 00:34:53.207 cpu : usr=98.37%, sys=1.23%, ctx=40, majf=0, minf=9 00:34:53.207 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:34:53.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.207 filename2: (groupid=0, jobs=1): err= 0: pid=3352414: Mon Nov 25 13:33:49 2024 00:34:53.207 read: IOPS=79, BW=316KiB/s (324kB/s)(3192KiB/10097msec) 00:34:53.207 slat (nsec): min=7308, max=55952, avg=19852.76, stdev=5253.72 00:34:53.207 clat (msec): min=74, max=331, avg=201.94, stdev=37.30 00:34:53.207 lat (msec): min=74, max=331, avg=201.96, stdev=37.30 00:34:53.207 clat percentiles (msec): 00:34:53.207 | 1.00th=[ 75], 5.00th=[ 140], 10.00th=[ 180], 20.00th=[ 184], 00:34:53.207 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 203], 60.00th=[ 207], 00:34:53.207 | 70.00th=[ 213], 80.00th=[ 218], 90.00th=[ 239], 95.00th=[ 266], 00:34:53.207 | 99.00th=[ 321], 99.50th=[ 326], 
99.90th=[ 330], 99.95th=[ 330], 00:34:53.207 | 99.99th=[ 330] 00:34:53.207 bw ( KiB/s): min= 240, max= 384, per=5.08%, avg=312.80, stdev=45.40, samples=20 00:34:53.207 iops : min= 60, max= 96, avg=78.20, stdev=11.35, samples=20 00:34:53.207 lat (msec) : 100=2.01%, 250=89.72%, 500=8.27% 00:34:53.207 cpu : usr=97.73%, sys=1.44%, ctx=47, majf=0, minf=9 00:34:53.207 IO depths : 1=1.3%, 2=2.8%, 4=10.5%, 8=74.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:34:53.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 complete : 0=0.0%, 4=89.9%, 8=4.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.207 issued rwts: total=798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.207 filename2: (groupid=0, jobs=1): err= 0: pid=3352415: Mon Nov 25 13:33:49 2024 00:34:53.207 read: IOPS=79, BW=318KiB/s (325kB/s)(3208KiB/10093msec) 00:34:53.207 slat (nsec): min=7916, max=89707, avg=16122.89, stdev=16625.51 00:34:53.207 clat (msec): min=118, max=314, avg=200.63, stdev=26.36 00:34:53.207 lat (msec): min=118, max=314, avg=200.65, stdev=26.37 00:34:53.207 clat percentiles (msec): 00:34:53.207 | 1.00th=[ 120], 5.00th=[ 157], 10.00th=[ 182], 20.00th=[ 184], 00:34:53.207 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 203], 60.00th=[ 207], 00:34:53.207 | 70.00th=[ 211], 80.00th=[ 213], 90.00th=[ 230], 95.00th=[ 232], 00:34:53.207 | 99.00th=[ 284], 99.50th=[ 296], 99.90th=[ 313], 99.95th=[ 313], 00:34:53.207 | 99.99th=[ 313] 00:34:53.208 bw ( KiB/s): min= 256, max= 384, per=5.12%, avg=314.40, stdev=45.34, samples=20 00:34:53.208 iops : min= 64, max= 96, avg=78.60, stdev=11.33, samples=20 00:34:53.208 lat (msec) : 250=97.01%, 500=2.99% 00:34:53.208 cpu : usr=98.11%, sys=1.31%, ctx=27, majf=0, minf=9 00:34:53.208 IO depths : 1=0.7%, 2=1.7%, 4=9.1%, 8=76.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:34:53.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 complete : 0=0.0%, 4=89.5%, 
8=5.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 issued rwts: total=802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.208 filename2: (groupid=0, jobs=1): err= 0: pid=3352416: Mon Nov 25 13:33:49 2024 00:34:53.208 read: IOPS=57, BW=229KiB/s (235kB/s)(2304KiB/10058msec) 00:34:53.208 slat (usec): min=5, max=120, avg=70.14, stdev=20.65 00:34:53.208 clat (msec): min=110, max=437, avg=278.82, stdev=55.52 00:34:53.208 lat (msec): min=110, max=437, avg=278.89, stdev=55.53 00:34:53.208 clat percentiles (msec): 00:34:53.208 | 1.00th=[ 111], 5.00th=[ 182], 10.00th=[ 197], 20.00th=[ 224], 00:34:53.208 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 296], 00:34:53.208 | 70.00th=[ 300], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 338], 00:34:53.208 | 99.00th=[ 401], 99.50th=[ 422], 99.90th=[ 439], 99.95th=[ 439], 00:34:53.208 | 99.99th=[ 439] 00:34:53.208 bw ( KiB/s): min= 128, max= 368, per=3.63%, avg=224.00, stdev=66.48, samples=20 00:34:53.208 iops : min= 32, max= 92, avg=56.00, stdev=16.62, samples=20 00:34:53.208 lat (msec) : 250=20.14%, 500=79.86% 00:34:53.208 cpu : usr=98.18%, sys=1.32%, ctx=14, majf=0, minf=9 00:34:53.208 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:34:53.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.208 filename2: (groupid=0, jobs=1): err= 0: pid=3352417: Mon Nov 25 13:33:49 2024 00:34:53.208 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10074msec) 00:34:53.208 slat (usec): min=5, max=101, avg=67.12, stdev=14.79 00:34:53.208 clat (msec): min=83, max=466, avg=287.24, stdev=55.56 00:34:53.208 lat (msec): min=83, max=466, avg=287.31, stdev=55.56 00:34:53.208 clat 
percentiles (msec): 00:34:53.208 | 1.00th=[ 85], 5.00th=[ 190], 10.00th=[ 218], 20.00th=[ 271], 00:34:53.208 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 300], 00:34:53.208 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 351], 00:34:53.208 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 468], 99.95th=[ 468], 00:34:53.208 | 99.99th=[ 468] 00:34:53.208 bw ( KiB/s): min= 128, max= 256, per=3.54%, avg=217.60, stdev=60.18, samples=20 00:34:53.208 iops : min= 32, max= 64, avg=54.40, stdev=15.05, samples=20 00:34:53.208 lat (msec) : 100=2.86%, 250=13.21%, 500=83.93% 00:34:53.208 cpu : usr=98.11%, sys=1.38%, ctx=16, majf=0, minf=9 00:34:53.208 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:34:53.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.208 filename2: (groupid=0, jobs=1): err= 0: pid=3352418: Mon Nov 25 13:33:49 2024 00:34:53.208 read: IOPS=55, BW=222KiB/s (228kB/s)(2240KiB/10073msec) 00:34:53.208 slat (usec): min=24, max=105, avg=69.70, stdev=11.85 00:34:53.208 clat (msec): min=188, max=410, avg=287.08, stdev=39.80 00:34:53.208 lat (msec): min=188, max=410, avg=287.15, stdev=39.81 00:34:53.208 clat percentiles (msec): 00:34:53.208 | 1.00th=[ 188], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 262], 00:34:53.208 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 305], 00:34:53.208 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 338], 00:34:53.208 | 99.00th=[ 342], 99.50th=[ 393], 99.90th=[ 409], 99.95th=[ 409], 00:34:53.208 | 99.99th=[ 409] 00:34:53.208 bw ( KiB/s): min= 128, max= 384, per=3.54%, avg=217.60, stdev=71.82, samples=20 00:34:53.208 iops : min= 32, max= 96, avg=54.40, stdev=17.95, samples=20 00:34:53.208 lat (msec) : 
250=15.00%, 500=85.00% 00:34:53.208 cpu : usr=98.31%, sys=1.17%, ctx=40, majf=0, minf=9 00:34:53.208 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:53.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.208 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:53.208 00:34:53.208 Run status group 0 (all jobs): 00:34:53.208 READ: bw=6137KiB/s (6285kB/s), 222KiB/s-324KiB/s (228kB/s-332kB/s), io=60.5MiB (63.5MB), run=10042-10097msec 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.208 
13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:53.208 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 bdev_null0 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 
13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 [2024-11-25 13:33:49.928953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 bdev_null1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 
13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.209 13:33:49 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.209 { 00:34:53.209 "params": { 00:34:53.209 "name": "Nvme$subsystem", 00:34:53.209 "trtype": "$TEST_TRANSPORT", 00:34:53.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.209 "adrfam": "ipv4", 00:34:53.209 "trsvcid": "$NVMF_PORT", 00:34:53.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.209 "hdgst": ${hdgst:-false}, 00:34:53.209 "ddgst": ${ddgst:-false} 00:34:53.209 }, 00:34:53.209 "method": "bdev_nvme_attach_controller" 00:34:53.209 } 00:34:53.209 EOF 00:34:53.209 )") 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:53.209 { 00:34:53.209 "params": { 00:34:53.209 "name": "Nvme$subsystem", 00:34:53.209 "trtype": "$TEST_TRANSPORT", 00:34:53.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:53.209 "adrfam": "ipv4", 00:34:53.209 "trsvcid": "$NVMF_PORT", 00:34:53.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:53.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:53.209 "hdgst": ${hdgst:-false}, 00:34:53.209 "ddgst": ${ddgst:-false} 00:34:53.209 }, 00:34:53.209 "method": "bdev_nvme_attach_controller" 00:34:53.209 } 00:34:53.209 EOF 00:34:53.209 )") 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:53.209 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:53.210 13:33:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:53.210 "params": { 00:34:53.210 "name": "Nvme0", 00:34:53.210 "trtype": "tcp", 00:34:53.210 "traddr": "10.0.0.2", 00:34:53.210 "adrfam": "ipv4", 00:34:53.210 "trsvcid": "4420", 00:34:53.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:53.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:53.210 "hdgst": false, 00:34:53.210 "ddgst": false 00:34:53.210 }, 00:34:53.210 "method": "bdev_nvme_attach_controller" 00:34:53.210 },{ 00:34:53.210 "params": { 00:34:53.210 "name": "Nvme1", 00:34:53.210 "trtype": "tcp", 00:34:53.210 "traddr": "10.0.0.2", 00:34:53.210 "adrfam": "ipv4", 00:34:53.210 "trsvcid": "4420", 00:34:53.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:53.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:53.210 "hdgst": false, 00:34:53.210 "ddgst": false 00:34:53.210 }, 00:34:53.210 "method": "bdev_nvme_attach_controller" 00:34:53.210 }' 00:34:53.210 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.210 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.210 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:53.210 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:53.210 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:53.210 13:33:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:53.210 13:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:53.210 13:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:53.210 13:33:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:53.210 13:33:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:53.210 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:53.210 ... 00:34:53.210 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:53.210 ... 00:34:53.210 fio-3.35 00:34:53.210 Starting 4 threads 00:34:58.470 00:34:58.470 filename0: (groupid=0, jobs=1): err= 0: pid=3353803: Mon Nov 25 13:33:56 2024 00:34:58.470 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(74.1MiB/5041msec) 00:34:58.470 slat (nsec): min=6500, max=54499, avg=12616.16, stdev=4939.34 00:34:58.470 clat (usec): min=728, max=41754, avg=4185.76, stdev=820.85 00:34:58.470 lat (usec): min=741, max=41766, avg=4198.38, stdev=820.85 00:34:58.470 clat percentiles (usec): 00:34:58.470 | 1.00th=[ 2671], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 3949], 00:34:58.470 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:34:58.470 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4752], 00:34:58.470 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 7373], 99.95th=[ 7504], 00:34:58.470 | 99.99th=[41681] 00:34:58.470 bw ( KiB/s): min=14864, max=15488, per=25.47%, avg=15179.00, stdev=203.28, samples=10 00:34:58.470 iops : min= 1858, max= 1936, avg=1897.30, stdev=25.48, samples=10 00:34:58.470 lat (usec) : 750=0.01%, 1000=0.04% 00:34:58.470 lat (msec) : 2=0.34%, 4=21.80%, 10=77.78%, 50=0.03% 00:34:58.470 cpu : usr=93.31%, sys=6.19%, ctx=8, majf=0, minf=107 00:34:58.470 IO depths : 1=0.7%, 2=12.7%, 4=59.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:58.470 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.470 issued rwts: total=9488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:58.470 filename0: (groupid=0, jobs=1): err= 0: pid=3353804: Mon Nov 25 13:33:56 2024 00:34:58.470 read: IOPS=1851, BW=14.5MiB/s (15.2MB/s)(72.3MiB/5002msec) 00:34:58.470 slat (nsec): min=6596, max=46503, avg=13466.23, stdev=4831.13 00:34:58.470 clat (usec): min=746, max=7813, avg=4269.84, stdev=634.47 00:34:58.470 lat (usec): min=758, max=7827, avg=4283.30, stdev=634.36 00:34:58.470 clat percentiles (usec): 00:34:58.470 | 1.00th=[ 2114], 5.00th=[ 3523], 10.00th=[ 3818], 20.00th=[ 4080], 00:34:58.470 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:34:58.470 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5342], 00:34:58.470 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 7439], 99.95th=[ 7570], 00:34:58.470 | 99.99th=[ 7832] 00:34:58.470 bw ( KiB/s): min=14224, max=15120, per=24.85%, avg=14805.33, stdev=243.84, samples=9 00:34:58.470 iops : min= 1778, max= 1890, avg=1850.67, stdev=30.48, samples=9 00:34:58.470 lat (usec) : 750=0.01%, 1000=0.10% 00:34:58.470 lat (msec) : 2=0.81%, 4=14.16%, 10=84.92% 00:34:58.470 cpu : usr=93.06%, sys=6.42%, ctx=9, majf=0, minf=72 00:34:58.470 IO depths : 1=0.8%, 2=17.0%, 4=56.4%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.470 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.470 issued rwts: total=9259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:58.470 filename1: (groupid=0, jobs=1): err= 0: pid=3353805: Mon Nov 25 13:33:56 2024 00:34:58.470 read: IOPS=1919, BW=15.0MiB/s (15.7MB/s)(75.0MiB/5003msec) 00:34:58.470 slat (nsec): min=4674, max=69772, avg=12815.40, stdev=4664.60 
00:34:58.470 clat (usec): min=760, max=7602, avg=4117.15, stdev=548.66 00:34:58.470 lat (usec): min=772, max=7616, avg=4129.97, stdev=548.95 00:34:58.470 clat percentiles (usec): 00:34:58.470 | 1.00th=[ 2024], 5.00th=[ 3294], 10.00th=[ 3589], 20.00th=[ 3884], 00:34:58.470 | 30.00th=[ 4047], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:34:58.470 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4686], 00:34:58.470 | 99.00th=[ 5997], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7570], 00:34:58.470 | 99.99th=[ 7635] 00:34:58.470 bw ( KiB/s): min=14976, max=16624, per=25.77%, avg=15355.20, stdev=489.92, samples=10 00:34:58.470 iops : min= 1872, max= 2078, avg=1919.40, stdev=61.24, samples=10 00:34:58.470 lat (usec) : 1000=0.09% 00:34:58.470 lat (msec) : 2=0.86%, 4=23.76%, 10=75.28% 00:34:58.470 cpu : usr=92.36%, sys=7.12%, ctx=9, majf=0, minf=87 00:34:58.470 IO depths : 1=1.1%, 2=18.5%, 4=54.9%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.470 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.470 issued rwts: total=9605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:58.470 filename1: (groupid=0, jobs=1): err= 0: pid=3353806: Mon Nov 25 13:33:56 2024 00:34:58.470 read: IOPS=1838, BW=14.4MiB/s (15.1MB/s)(71.8MiB/5002msec) 00:34:58.470 slat (nsec): min=7391, max=46718, avg=13528.74, stdev=4969.61 00:34:58.470 clat (usec): min=761, max=8391, avg=4300.62, stdev=722.28 00:34:58.470 lat (usec): min=773, max=8412, avg=4314.14, stdev=721.98 00:34:58.470 clat percentiles (usec): 00:34:58.470 | 1.00th=[ 1565], 5.00th=[ 3589], 10.00th=[ 3851], 20.00th=[ 4080], 00:34:58.470 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:34:58.470 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5604], 00:34:58.470 | 99.00th=[ 7308], 99.50th=[ 7504], 
99.90th=[ 7767], 99.95th=[ 8029], 00:34:58.470 | 99.99th=[ 8455] 00:34:58.470 bw ( KiB/s): min=14176, max=14976, per=24.61%, avg=14666.67, stdev=231.03, samples=9 00:34:58.470 iops : min= 1772, max= 1872, avg=1833.33, stdev=28.88, samples=9 00:34:58.471 lat (usec) : 1000=0.17% 00:34:58.471 lat (msec) : 2=1.14%, 4=12.48%, 10=86.21% 00:34:58.471 cpu : usr=93.12%, sys=6.36%, ctx=6, majf=0, minf=68 00:34:58.471 IO depths : 1=0.6%, 2=16.0%, 4=57.2%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.471 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.471 issued rwts: total=9194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:58.471 00:34:58.471 Run status group 0 (all jobs): 00:34:58.471 READ: bw=58.2MiB/s (61.0MB/s), 14.4MiB/s-15.0MiB/s (15.1MB/s-15.7MB/s), io=293MiB (308MB), run=5002-5041msec 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:58.728 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.985 00:34:58.985 real 0m24.718s 00:34:58.985 user 4m34.613s 00:34:58.985 sys 0m6.422s 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.985 13:33:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.985 ************************************ 00:34:58.985 END TEST fio_dif_rand_params 00:34:58.985 ************************************ 00:34:58.985 13:33:56 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:58.985 13:33:56 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:58.985 13:33:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.986 13:33:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.986 ************************************ 00:34:58.986 START TEST fio_dif_digest 00:34:58.986 ************************************ 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.986 bdev_null0 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.986 [2024-11-25 13:33:56.482176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.986 { 00:34:58.986 "params": { 00:34:58.986 "name": "Nvme$subsystem", 00:34:58.986 "trtype": "$TEST_TRANSPORT", 00:34:58.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.986 "adrfam": "ipv4", 00:34:58.986 "trsvcid": "$NVMF_PORT", 00:34:58.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.986 "hdgst": ${hdgst:-false}, 00:34:58.986 "ddgst": ${ddgst:-false} 00:34:58.986 }, 00:34:58.986 "method": "bdev_nvme_attach_controller" 00:34:58.986 } 00:34:58.986 EOF 00:34:58.986 )") 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:58.986 "params": { 00:34:58.986 "name": "Nvme0", 00:34:58.986 "trtype": "tcp", 00:34:58.986 "traddr": "10.0.0.2", 00:34:58.986 "adrfam": "ipv4", 00:34:58.986 "trsvcid": "4420", 00:34:58.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.986 "hdgst": true, 00:34:58.986 "ddgst": true 00:34:58.986 }, 00:34:58.986 "method": "bdev_nvme_attach_controller" 00:34:58.986 }' 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.986 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.987 13:33:56 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:59.244 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:59.244 ... 
00:34:59.244 fio-3.35 00:34:59.244 Starting 3 threads 00:35:11.461 00:35:11.461 filename0: (groupid=0, jobs=1): err= 0: pid=3354673: Mon Nov 25 13:34:07 2024 00:35:11.461 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(250MiB/10046msec) 00:35:11.461 slat (nsec): min=5896, max=47418, avg=13700.26, stdev=3253.90 00:35:11.461 clat (usec): min=9107, max=48768, avg=15061.09, stdev=1472.68 00:35:11.461 lat (usec): min=9119, max=48781, avg=15074.79, stdev=1472.59 00:35:11.461 clat percentiles (usec): 00:35:11.461 | 1.00th=[10945], 5.00th=[13566], 10.00th=[13829], 20.00th=[14353], 00:35:11.461 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15008], 60.00th=[15270], 00:35:11.461 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:35:11.461 | 99.00th=[17433], 99.50th=[17957], 99.90th=[45876], 99.95th=[49021], 00:35:11.461 | 99.99th=[49021] 00:35:11.461 bw ( KiB/s): min=24576, max=26880, per=32.30%, avg=25523.20, stdev=532.47, samples=20 00:35:11.461 iops : min= 192, max= 210, avg=199.40, stdev= 4.16, samples=20 00:35:11.461 lat (msec) : 10=0.40%, 20=99.50%, 50=0.10% 00:35:11.461 cpu : usr=92.39%, sys=7.10%, ctx=20, majf=0, minf=113 00:35:11.461 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.461 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.461 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.461 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:11.461 filename0: (groupid=0, jobs=1): err= 0: pid=3354674: Mon Nov 25 13:34:07 2024 00:35:11.461 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10044msec) 00:35:11.461 slat (usec): min=6, max=375, avg=13.79, stdev= 8.61 00:35:11.461 clat (usec): min=8876, max=58103, avg=14740.29, stdev=2284.74 00:35:11.461 lat (usec): min=8888, max=58158, avg=14754.08, stdev=2284.95 00:35:11.461 clat percentiles (usec): 00:35:11.461 | 1.00th=[11994], 
5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:35:11.462 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:35:11.462 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:35:11.462 | 99.00th=[16909], 99.50th=[17433], 99.90th=[57934], 99.95th=[57934], 00:35:11.462 | 99.99th=[57934] 00:35:11.462 bw ( KiB/s): min=23342, max=27136, per=32.99%, avg=26063.10, stdev=746.01, samples=20 00:35:11.462 iops : min= 182, max= 212, avg=203.60, stdev= 5.90, samples=20 00:35:11.462 lat (msec) : 10=0.44%, 20=99.31%, 100=0.25% 00:35:11.462 cpu : usr=91.14%, sys=8.34%, ctx=20, majf=0, minf=149 00:35:11.462 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.462 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.462 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:11.462 filename0: (groupid=0, jobs=1): err= 0: pid=3354675: Mon Nov 25 13:34:07 2024 00:35:11.462 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(271MiB/10045msec) 00:35:11.462 slat (nsec): min=7371, max=39115, avg=14077.57, stdev=3188.97 00:35:11.462 clat (usec): min=9113, max=55666, avg=13874.80, stdev=2153.36 00:35:11.462 lat (usec): min=9134, max=55679, avg=13888.88, stdev=2153.35 00:35:11.462 clat percentiles (usec): 00:35:11.462 | 1.00th=[11338], 5.00th=[12125], 10.00th=[12387], 20.00th=[12911], 00:35:11.462 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:35:11.462 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15401], 00:35:11.462 | 99.00th=[16319], 99.50th=[16909], 99.90th=[54264], 99.95th=[55313], 00:35:11.462 | 99.99th=[55837] 00:35:11.462 bw ( KiB/s): min=25344, max=28672, per=35.05%, avg=27696.35, stdev=644.55, samples=20 00:35:11.462 iops : min= 198, max= 224, avg=216.35, stdev= 5.02, samples=20 
00:35:11.462 lat (msec) : 10=0.42%, 20=99.35%, 50=0.05%, 100=0.18% 00:35:11.462 cpu : usr=92.31%, sys=7.15%, ctx=22, majf=0, minf=194 00:35:11.462 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:11.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:11.462 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:11.462 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:11.462 00:35:11.462 Run status group 0 (all jobs): 00:35:11.462 READ: bw=77.2MiB/s (80.9MB/s), 24.8MiB/s-27.0MiB/s (26.0MB/s-28.3MB/s), io=775MiB (813MB), run=10044-10046msec 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.462 00:35:11.462 real 
0m11.207s 00:35:11.462 user 0m28.934s 00:35:11.462 sys 0m2.527s 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:11.462 13:34:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:11.462 ************************************ 00:35:11.462 END TEST fio_dif_digest 00:35:11.462 ************************************ 00:35:11.462 13:34:07 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:11.462 13:34:07 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:11.462 rmmod nvme_tcp 00:35:11.462 rmmod nvme_fabrics 00:35:11.462 rmmod nvme_keyring 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3348259 ']' 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3348259 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3348259 ']' 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3348259 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3348259 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:11.462 13:34:07 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3348259' 00:35:11.462 killing process with pid 3348259 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3348259 00:35:11.462 13:34:07 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3348259 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:11.462 13:34:07 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:11.462 Waiting for block devices as requested 00:35:11.718 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:11.718 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:11.718 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:11.974 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:11.974 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:11.974 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:11.974 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:11.974 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:12.232 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:12.232 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:12.232 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:12.489 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:12.489 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:12.489 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:12.747 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:12.747 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:12.747 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:12.747 13:34:10 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:12.747 13:34:10 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:12.747 13:34:10 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:12.747 13:34:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:12.747 13:34:10 nvmf_dif -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:35:12.747 13:34:10 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:13.004 13:34:10 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:13.004 13:34:10 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:13.004 13:34:10 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.004 13:34:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:13.004 13:34:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.903 13:34:12 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:14.903 00:35:14.903 real 1m7.809s 00:35:14.903 user 6m32.619s 00:35:14.903 sys 0m17.915s 00:35:14.903 13:34:12 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.903 13:34:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:14.903 ************************************ 00:35:14.903 END TEST nvmf_dif 00:35:14.903 ************************************ 00:35:14.903 13:34:12 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:14.903 13:34:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:14.903 13:34:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:14.903 13:34:12 -- common/autotest_common.sh@10 -- # set +x 00:35:14.903 ************************************ 00:35:14.903 START TEST nvmf_abort_qd_sizes 00:35:14.903 ************************************ 00:35:14.903 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:14.903 * Looking for test storage... 
00:35:15.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:15.161 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.162 --rc genhtml_branch_coverage=1 00:35:15.162 --rc genhtml_function_coverage=1 00:35:15.162 --rc genhtml_legend=1 00:35:15.162 --rc geninfo_all_blocks=1 00:35:15.162 --rc geninfo_unexecuted_blocks=1 00:35:15.162 00:35:15.162 ' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.162 --rc genhtml_branch_coverage=1 00:35:15.162 --rc genhtml_function_coverage=1 00:35:15.162 --rc genhtml_legend=1 00:35:15.162 --rc 
geninfo_all_blocks=1 00:35:15.162 --rc geninfo_unexecuted_blocks=1 00:35:15.162 00:35:15.162 ' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.162 --rc genhtml_branch_coverage=1 00:35:15.162 --rc genhtml_function_coverage=1 00:35:15.162 --rc genhtml_legend=1 00:35:15.162 --rc geninfo_all_blocks=1 00:35:15.162 --rc geninfo_unexecuted_blocks=1 00:35:15.162 00:35:15.162 ' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:15.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.162 --rc genhtml_branch_coverage=1 00:35:15.162 --rc genhtml_function_coverage=1 00:35:15.162 --rc genhtml_legend=1 00:35:15.162 --rc geninfo_all_blocks=1 00:35:15.162 --rc geninfo_unexecuted_blocks=1 00:35:15.162 00:35:15.162 ' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.162 13:34:12 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.162 13:34:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:15.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:15.162 13:34:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:17.690 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:17.691 13:34:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:17.691 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:17.691 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:17.691 Found net devices under 0000:09:00.0: cvl_0_0 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:17.691 Found net devices under 0000:09:00.1: cvl_0_1 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:17.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:17.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:35:17.691 00:35:17.691 --- 10.0.0.2 ping statistics --- 00:35:17.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.691 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:17.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:17.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:35:17.691 00:35:17.691 --- 10.0.0.1 ping statistics --- 00:35:17.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:17.691 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:17.691 13:34:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:18.626 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:18.626 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:18.626 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:18.626 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:18.626 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:18.626 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:18.626 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:18.626 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:18.626 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:19.561 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:19.818 13:34:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3359587 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3359587 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3359587 ']' 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.818 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:19.818 [2024-11-25 13:34:17.374156] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:35:19.819 [2024-11-25 13:34:17.374238] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.819 [2024-11-25 13:34:17.442035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:20.076 [2024-11-25 13:34:17.503454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:20.076 [2024-11-25 13:34:17.503500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:20.076 [2024-11-25 13:34:17.503515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:20.076 [2024-11-25 13:34:17.503528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:20.076 [2024-11-25 13:34:17.503538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:20.076 [2024-11-25 13:34:17.505040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.076 [2024-11-25 13:34:17.505128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:20.076 [2024-11-25 13:34:17.505196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:20.076 [2024-11-25 13:34:17.505200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:20.076 13:34:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:20.076 ************************************ 00:35:20.076 START TEST spdk_target_abort 00:35:20.076 ************************************ 00:35:20.076 13:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:20.076 13:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:20.076 13:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:35:20.076 13:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.076 13:34:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.351 spdk_targetn1 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.351 [2024-11-25 13:34:20.535402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.351 [2024-11-25 13:34:20.576664] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:23.351 13:34:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:26.629 Initializing NVMe Controllers 00:35:26.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:26.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:26.629 Initialization complete. Launching workers. 
00:35:26.629 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13121, failed: 0 00:35:26.629 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1194, failed to submit 11927 00:35:26.629 success 740, unsuccessful 454, failed 0 00:35:26.629 13:34:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:26.629 13:34:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:29.906 Initializing NVMe Controllers 00:35:29.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:29.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:29.906 Initialization complete. Launching workers. 00:35:29.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8894, failed: 0 00:35:29.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1226, failed to submit 7668 00:35:29.906 success 329, unsuccessful 897, failed 0 00:35:29.906 13:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:29.906 13:34:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:33.183 Initializing NVMe Controllers 00:35:33.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:33.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:33.183 Initialization complete. Launching workers. 
00:35:33.183 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31007, failed: 0 00:35:33.183 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2601, failed to submit 28406 00:35:33.183 success 496, unsuccessful 2105, failed 0 00:35:33.183 13:34:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:33.183 13:34:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.183 13:34:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:33.183 13:34:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.183 13:34:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:33.183 13:34:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.183 13:34:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3359587 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3359587 ']' 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3359587 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3359587 00:35:34.115 13:34:31 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3359587' 00:35:34.115 killing process with pid 3359587 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3359587 00:35:34.115 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3359587 00:35:34.373 00:35:34.373 real 0m14.157s 00:35:34.373 user 0m53.663s 00:35:34.373 sys 0m2.664s 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:34.373 ************************************ 00:35:34.373 END TEST spdk_target_abort 00:35:34.373 ************************************ 00:35:34.373 13:34:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:34.373 13:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:34.373 13:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.373 13:34:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:34.373 ************************************ 00:35:34.373 START TEST kernel_target_abort 00:35:34.373 ************************************ 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:34.373 13:34:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:34.373 13:34:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:35.305 Waiting for block devices as requested 00:35:35.305 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:35.562 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:35.562 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:35.562 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:35.834 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:35.834 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:35.834 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:35.834 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:36.149 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:36.149 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:36.420 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:36.420 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:36.421 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:36.421 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:36.421 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:36.678 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:36.678 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:36.678 13:34:34 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:36.678 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:36.935 No valid GPT data, bailing 00:35:36.935 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:35:36.936 00:35:36.936 Discovery Log Number of Records 2, Generation counter 2 00:35:36.936 =====Discovery Log Entry 0====== 00:35:36.936 trtype: tcp 00:35:36.936 adrfam: ipv4 00:35:36.936 subtype: current discovery subsystem 00:35:36.936 treq: not specified, sq flow control disable supported 00:35:36.936 portid: 1 00:35:36.936 trsvcid: 4420 00:35:36.936 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:36.936 traddr: 10.0.0.1 00:35:36.936 eflags: none 00:35:36.936 sectype: none 00:35:36.936 =====Discovery Log Entry 1====== 00:35:36.936 trtype: tcp 00:35:36.936 adrfam: ipv4 00:35:36.936 subtype: nvme subsystem 00:35:36.936 treq: not specified, sq flow control disable supported 00:35:36.936 portid: 1 00:35:36.936 trsvcid: 4420 00:35:36.936 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:36.936 traddr: 10.0.0.1 00:35:36.936 eflags: none 00:35:36.936 sectype: none 00:35:36.936 13:34:34 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:36.936 13:34:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.231 Initializing NVMe Controllers 00:35:40.231 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:40.231 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:40.231 Initialization complete. Launching workers. 
00:35:40.231 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48865, failed: 0 00:35:40.231 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48865, failed to submit 0 00:35:40.231 success 0, unsuccessful 48865, failed 0 00:35:40.231 13:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:40.231 13:34:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:43.507 Initializing NVMe Controllers 00:35:43.507 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.507 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:43.507 Initialization complete. Launching workers. 00:35:43.507 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95303, failed: 0 00:35:43.507 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21314, failed to submit 73989 00:35:43.507 success 0, unsuccessful 21314, failed 0 00:35:43.507 13:34:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:43.507 13:34:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:46.785 Initializing NVMe Controllers 00:35:46.785 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:46.785 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:46.785 Initialization complete. Launching workers. 
00:35:46.785 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87667, failed: 0 00:35:46.785 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21914, failed to submit 65753 00:35:46.785 success 0, unsuccessful 21914, failed 0 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:46.785 13:34:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:47.350 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:47.609 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:47.609 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:47.609 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:47.609 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:47.609 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:47.609 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:47.609 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:47.609 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:48.582 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:48.841 00:35:48.841 real 0m14.356s 00:35:48.841 user 0m6.128s 00:35:48.841 sys 0m3.375s 00:35:48.841 13:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.841 13:34:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:48.841 ************************************ 00:35:48.841 END TEST kernel_target_abort 00:35:48.841 ************************************ 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:48.841 rmmod nvme_tcp 00:35:48.841 rmmod nvme_fabrics 00:35:48.841 rmmod nvme_keyring 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3359587 ']' 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3359587 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3359587 ']' 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3359587 00:35:48.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3359587) - No such process 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3359587 is not found' 00:35:48.841 Process with pid 3359587 is not found 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:48.841 13:34:46 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:50.216 Waiting for block devices as requested 00:35:50.216 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:50.216 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:50.216 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:50.216 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:50.216 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:50.475 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:50.475 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:50.475 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:50.733 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:50.733 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:50.733 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:50.991 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:50.991 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:50.991 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:50.991 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:51.249 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:51.249 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:51.507 13:34:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:53.426 13:34:50 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:53.426 00:35:53.426 real 0m38.451s 00:35:53.426 user 1m2.083s 00:35:53.426 sys 0m9.734s 00:35:53.426 13:34:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.426 13:34:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.426 ************************************ 00:35:53.426 END TEST nvmf_abort_qd_sizes 00:35:53.426 ************************************ 00:35:53.426 13:34:50 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:53.426 13:34:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:53.426 13:34:50 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:53.426 13:34:50 -- common/autotest_common.sh@10 -- # set +x 00:35:53.426 ************************************ 00:35:53.426 START TEST keyring_file 00:35:53.426 ************************************ 00:35:53.426 13:34:51 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:53.426 * Looking for test storage... 00:35:53.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:53.426 13:34:51 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:53.426 13:34:51 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:53.426 13:34:51 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:53.685 13:34:51 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:53.685 13:34:51 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:53.685 13:34:51 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:53.685 13:34:51 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:53.685 13:34:51 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.685 --rc genhtml_branch_coverage=1 00:35:53.685 --rc genhtml_function_coverage=1 00:35:53.685 --rc genhtml_legend=1 00:35:53.685 --rc geninfo_all_blocks=1 00:35:53.685 --rc geninfo_unexecuted_blocks=1 00:35:53.685 00:35:53.685 ' 00:35:53.685 13:34:51 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.685 --rc genhtml_branch_coverage=1 00:35:53.685 --rc genhtml_function_coverage=1 00:35:53.685 --rc genhtml_legend=1 00:35:53.685 --rc geninfo_all_blocks=1 00:35:53.685 --rc 
geninfo_unexecuted_blocks=1 00:35:53.685 00:35:53.685 ' 00:35:53.685 13:34:51 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.685 --rc genhtml_branch_coverage=1 00:35:53.685 --rc genhtml_function_coverage=1 00:35:53.685 --rc genhtml_legend=1 00:35:53.685 --rc geninfo_all_blocks=1 00:35:53.685 --rc geninfo_unexecuted_blocks=1 00:35:53.685 00:35:53.685 ' 00:35:53.685 13:34:51 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:53.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:53.685 --rc genhtml_branch_coverage=1 00:35:53.685 --rc genhtml_function_coverage=1 00:35:53.685 --rc genhtml_legend=1 00:35:53.685 --rc geninfo_all_blocks=1 00:35:53.685 --rc geninfo_unexecuted_blocks=1 00:35:53.685 00:35:53.685 ' 00:35:53.685 13:34:51 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:53.686 13:34:51 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:53.686 13:34:51 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:53.686 13:34:51 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:53.686 13:34:51 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:53.686 13:34:51 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:53.686 13:34:51 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.686 13:34:51 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.686 13:34:51 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.686 13:34:51 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:53.686 13:34:51 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:53.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CGIYimiAsI 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CGIYimiAsI 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CGIYimiAsI 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.CGIYimiAsI 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rniLrLH8Mj 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:53.686 13:34:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rniLrLH8Mj 00:35:53.686 13:34:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rniLrLH8Mj 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.rniLrLH8Mj 
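The `prep_key` / `format_interchange_psk` trace above turns a raw hex key into the NVMe-oF TLS PSK interchange envelope before writing it to a temp file and `chmod 0600`-ing it. Below is a minimal Python sketch of that conversion, assuming the TP 8006 interchange layout (prefix, two-hex-digit hash identifier, then base64 of the key bytes with a little-endian CRC32 appended, colon-terminated). This is an illustration inferred from the log, not SPDK's actual `nvmf/common.sh` helper:

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    # Append a little-endian CRC32 of the raw key bytes, base64-encode
    # key+CRC, and wrap it in the "NVMeTLSkey-1:<hash>:<b64>:" envelope.
    raw = bytes.fromhex(key_hex)
    crc = zlib.crc32(raw).to_bytes(4, "little")
    b64 = base64.b64encode(raw + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

With digest 0 (no hash) the output begins `NVMeTLSkey-1:00:` and ends with a trailing colon, the same envelope shape the test writes into its `/tmp/tmp.*` key files.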
00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@30 -- # tgtpid=3365383 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:53.686 13:34:51 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3365383 00:35:53.686 13:34:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3365383 ']' 00:35:53.686 13:34:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.686 13:34:51 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:53.686 13:34:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:53.686 13:34:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:53.686 13:34:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:53.686 [2024-11-25 13:34:51.303048] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:35:53.686 [2024-11-25 13:34:51.303142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3365383 ] 00:35:53.945 [2024-11-25 13:34:51.372775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.945 [2024-11-25 13:34:51.438216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:54.203 13:34:51 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:54.203 [2024-11-25 13:34:51.726329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.203 null0 00:35:54.203 [2024-11-25 13:34:51.758399] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:54.203 [2024-11-25 13:34:51.758941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.203 13:34:51 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:54.203 [2024-11-25 13:34:51.786442] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:54.203 request: 00:35:54.203 { 00:35:54.203 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.203 "secure_channel": false, 00:35:54.203 "listen_address": { 00:35:54.203 "trtype": "tcp", 00:35:54.203 "traddr": "127.0.0.1", 00:35:54.203 "trsvcid": "4420" 00:35:54.203 }, 00:35:54.203 "method": "nvmf_subsystem_add_listener", 00:35:54.203 "req_id": 1 00:35:54.203 } 00:35:54.203 Got JSON-RPC error response 00:35:54.203 response: 00:35:54.203 { 00:35:54.203 "code": -32602, 00:35:54.203 "message": "Invalid parameters" 00:35:54.203 } 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:54.203 13:34:51 keyring_file -- keyring/file.sh@47 -- # bperfpid=3365396 00:35:54.203 13:34:51 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3365396 /var/tmp/bperf.sock 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3365396 ']' 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:54.203 13:34:51 
keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.203 13:34:51 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:54.203 13:34:51 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:54.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:54.204 13:34:51 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.204 13:34:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:54.204 [2024-11-25 13:34:51.837135] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 00:35:54.204 [2024-11-25 13:34:51.837208] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3365396 ] 00:35:54.462 [2024-11-25 13:34:51.904686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.462 [2024-11-25 13:34:51.965088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:54.462 13:34:52 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:54.462 13:34:52 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:54.462 13:34:52 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:35:54.462 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:35:54.720 13:34:52 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rniLrLH8Mj 00:35:54.720 13:34:52 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rniLrLH8Mj 00:35:54.977 13:34:52 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:54.977 13:34:52 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:54.977 13:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:54.977 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:54.977 13:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.234 13:34:52 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.CGIYimiAsI == \/\t\m\p\/\t\m\p\.\C\G\I\Y\i\m\i\A\s\I ]] 00:35:55.234 13:34:52 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:55.234 13:34:52 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:55.234 13:34:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.234 13:34:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.234 13:34:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:55.798 13:34:53 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.rniLrLH8Mj == \/\t\m\p\/\t\m\p\.\r\n\i\L\r\L\H\8\M\j ]] 00:35:55.798 13:34:53 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:35:55.798 13:34:53 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:55.798 13:34:53 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:55.798 13:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.056 13:34:53 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:56.056 13:34:53 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:56.056 13:34:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:56.313 [2024-11-25 13:34:53.949512] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:56.570 nvme0n1 00:35:56.570 13:34:54 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:56.570 13:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:56.570 13:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:56.570 13:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:56.570 13:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.570 13:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:35:56.827 13:34:54 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:56.827 13:34:54 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:56.827 13:34:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:56.827 13:34:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:56.827 13:34:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:56.827 13:34:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:56.827 13:34:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:57.084 13:34:54 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:57.084 13:34:54 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:57.084 Running I/O for 1 seconds... 00:35:58.452 10460.00 IOPS, 40.86 MiB/s 00:35:58.452 Latency(us) 00:35:58.452 [2024-11-25T12:34:56.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.452 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:58.452 nvme0n1 : 1.01 10509.82 41.05 0.00 0.00 12143.99 6941.96 20680.25 00:35:58.452 [2024-11-25T12:34:56.111Z] =================================================================================================================== 00:35:58.452 [2024-11-25T12:34:56.111Z] Total : 10509.82 41.05 0.00 0.00 12143.99 6941.96 20680.25 00:35:58.452 { 00:35:58.452 "results": [ 00:35:58.452 { 00:35:58.452 "job": "nvme0n1", 00:35:58.452 "core_mask": "0x2", 00:35:58.452 "workload": "randrw", 00:35:58.452 "percentage": 50, 00:35:58.452 "status": "finished", 00:35:58.452 "queue_depth": 128, 00:35:58.452 "io_size": 4096, 00:35:58.452 "runtime": 1.007439, 00:35:58.452 "iops": 10509.817467856614, 00:35:58.452 "mibps": 41.0539744838149, 
00:35:58.452 "io_failed": 0, 00:35:58.452 "io_timeout": 0, 00:35:58.452 "avg_latency_us": 12143.98980033301, 00:35:58.452 "min_latency_us": 6941.961481481481, 00:35:58.452 "max_latency_us": 20680.248888888887 00:35:58.452 } 00:35:58.452 ], 00:35:58.452 "core_count": 1 00:35:58.452 } 00:35:58.452 13:34:55 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:58.452 13:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:58.452 13:34:55 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:58.452 13:34:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:58.452 13:34:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.452 13:34:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.452 13:34:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.452 13:34:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:58.727 13:34:56 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:58.727 13:34:56 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:58.727 13:34:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:58.727 13:34:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.727 13:34:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.727 13:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.727 13:34:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:58.984 13:34:56 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:58.984 13:34:56 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:58.984 13:34:56 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:58.984 13:34:56 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:58.984 13:34:56 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:58.984 13:34:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:58.984 13:34:56 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:58.984 13:34:56 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:58.984 13:34:56 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:58.984 13:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:59.242 [2024-11-25 13:34:56.795693] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:59.242 [2024-11-25 13:34:56.796140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228cf10 (107): Transport endpoint is not connected 00:35:59.242 [2024-11-25 13:34:56.797130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228cf10 (9): Bad file descriptor 00:35:59.242 [2024-11-25 13:34:56.798129] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:59.242 [2024-11-25 13:34:56.798147] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:59.242 [2024-11-25 13:34:56.798173] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:59.242 [2024-11-25 13:34:56.798188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:59.242 request: 00:35:59.242 { 00:35:59.242 "name": "nvme0", 00:35:59.242 "trtype": "tcp", 00:35:59.242 "traddr": "127.0.0.1", 00:35:59.242 "adrfam": "ipv4", 00:35:59.242 "trsvcid": "4420", 00:35:59.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:59.242 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:59.242 "prchk_reftag": false, 00:35:59.242 "prchk_guard": false, 00:35:59.242 "hdgst": false, 00:35:59.242 "ddgst": false, 00:35:59.242 "psk": "key1", 00:35:59.242 "allow_unrecognized_csi": false, 00:35:59.242 "method": "bdev_nvme_attach_controller", 00:35:59.242 "req_id": 1 00:35:59.242 } 00:35:59.242 Got JSON-RPC error response 00:35:59.242 response: 00:35:59.242 { 00:35:59.242 "code": -5, 00:35:59.242 "message": "Input/output error" 00:35:59.242 } 00:35:59.242 13:34:56 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:59.242 13:34:56 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:59.242 13:34:56 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:59.242 13:34:56 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:59.242 13:34:56 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:59.242 13:34:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:59.242 13:34:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.242 13:34:56 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:59.242 13:34:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.242 13:34:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:59.499 13:34:57 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:59.499 13:34:57 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:59.499 13:34:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:59.499 13:34:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.499 13:34:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.499 13:34:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:59.499 13:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.756 13:34:57 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:59.757 13:34:57 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:59.757 13:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:00.017 13:34:57 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:00.017 13:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:00.318 13:34:57 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:00.318 13:34:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.318 13:34:57 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:00.575 13:34:58 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:36:00.575 13:34:58 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.CGIYimiAsI 00:36:00.575 13:34:58 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:36:00.575 13:34:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:00.575 13:34:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:36:00.575 13:34:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:00.575 13:34:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.575 13:34:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:00.575 13:34:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.575 13:34:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:36:00.575 13:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:36:00.832 [2024-11-25 13:34:58.440132] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CGIYimiAsI': 0100660 00:36:00.832 [2024-11-25 13:34:58.440167] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:00.832 request: 00:36:00.832 { 00:36:00.832 "name": "key0", 00:36:00.832 "path": "/tmp/tmp.CGIYimiAsI", 00:36:00.832 "method": "keyring_file_add_key", 00:36:00.832 "req_id": 1 00:36:00.832 } 00:36:00.832 Got JSON-RPC error response 00:36:00.832 response: 00:36:00.832 { 00:36:00.832 "code": -1, 00:36:00.832 "message": "Operation not permitted" 00:36:00.832 } 00:36:00.832 13:34:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:00.832 13:34:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:00.832 13:34:58 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:00.832 13:34:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:00.832 13:34:58 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.CGIYimiAsI 00:36:00.832 13:34:58 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:36:00.832 13:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CGIYimiAsI 00:36:01.088 13:34:58 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.CGIYimiAsI 00:36:01.088 13:34:58 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:01.088 13:34:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:01.088 13:34:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:01.088 13:34:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:01.088 13:34:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:01.088 13:34:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:01.653 13:34:59 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:01.653 13:34:59 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:01.653 13:34:59 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:01.653 13:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:01.653 [2024-11-25 13:34:59.254419] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.CGIYimiAsI': No such file or directory 00:36:01.653 [2024-11-25 13:34:59.254455] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:01.653 [2024-11-25 13:34:59.254494] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:01.653 [2024-11-25 13:34:59.254509] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:01.653 [2024-11-25 13:34:59.254522] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:01.653 [2024-11-25 13:34:59.254535] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:01.653 request: 00:36:01.653 { 00:36:01.653 "name": "nvme0", 00:36:01.653 "trtype": "tcp", 00:36:01.653 "traddr": "127.0.0.1", 00:36:01.653 "adrfam": "ipv4", 00:36:01.653 "trsvcid": "4420", 00:36:01.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.653 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:36:01.653 "prchk_reftag": false, 00:36:01.653 "prchk_guard": false, 00:36:01.653 "hdgst": false, 00:36:01.653 "ddgst": false, 00:36:01.653 "psk": "key0", 00:36:01.653 "allow_unrecognized_csi": false, 00:36:01.653 "method": "bdev_nvme_attach_controller", 00:36:01.653 "req_id": 1 00:36:01.653 } 00:36:01.653 Got JSON-RPC error response 00:36:01.653 response: 00:36:01.653 { 00:36:01.653 "code": -19, 00:36:01.653 "message": "No such device" 00:36:01.653 } 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:01.653 13:34:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:01.653 13:34:59 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:01.654 13:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:01.911 13:34:59 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:01.911 13:34:59 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:01.911 13:34:59 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:01.911 13:34:59 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:01.911 13:34:59 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:01.911 13:34:59 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:01.911 13:34:59 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FwjuwZBBV5 00:36:01.911 13:34:59 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:01.911 13:34:59 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:01.911 13:34:59 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:36:01.911 13:34:59 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:01.911 13:34:59 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:01.911 13:34:59 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:01.911 13:34:59 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:02.168 13:34:59 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FwjuwZBBV5 00:36:02.168 13:34:59 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FwjuwZBBV5 00:36:02.168 13:34:59 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.FwjuwZBBV5 00:36:02.168 13:34:59 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FwjuwZBBV5 00:36:02.168 13:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FwjuwZBBV5 00:36:02.426 13:34:59 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:02.426 13:34:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:02.684 nvme0n1 00:36:02.684 13:35:00 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:02.684 13:35:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:02.684 13:35:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.684 13:35:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.684 13:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.684 
13:35:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:02.942 13:35:00 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:02.942 13:35:00 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:02.942 13:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:03.199 13:35:00 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:03.199 13:35:00 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:03.199 13:35:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:03.199 13:35:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.199 13:35:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:03.456 13:35:01 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:03.456 13:35:01 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:03.456 13:35:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:03.456 13:35:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:03.456 13:35:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:03.456 13:35:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:03.456 13:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.713 13:35:01 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:03.713 13:35:01 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:03.713 13:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:36:03.971 13:35:01 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:03.971 13:35:01 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:03.971 13:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.228 13:35:01 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:04.228 13:35:01 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FwjuwZBBV5 00:36:04.228 13:35:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FwjuwZBBV5 00:36:04.486 13:35:02 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.rniLrLH8Mj 00:36:04.486 13:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.rniLrLH8Mj 00:36:05.049 13:35:02 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.049 13:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.307 nvme0n1 00:36:05.307 13:35:02 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:05.307 13:35:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:05.565 13:35:03 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:05.565 "subsystems": [ 00:36:05.565 { 00:36:05.565 "subsystem": "keyring", 00:36:05.565 
"config": [ 00:36:05.565 { 00:36:05.565 "method": "keyring_file_add_key", 00:36:05.565 "params": { 00:36:05.565 "name": "key0", 00:36:05.565 "path": "/tmp/tmp.FwjuwZBBV5" 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "keyring_file_add_key", 00:36:05.565 "params": { 00:36:05.565 "name": "key1", 00:36:05.565 "path": "/tmp/tmp.rniLrLH8Mj" 00:36:05.565 } 00:36:05.565 } 00:36:05.565 ] 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "subsystem": "iobuf", 00:36:05.565 "config": [ 00:36:05.565 { 00:36:05.565 "method": "iobuf_set_options", 00:36:05.565 "params": { 00:36:05.565 "small_pool_count": 8192, 00:36:05.565 "large_pool_count": 1024, 00:36:05.565 "small_bufsize": 8192, 00:36:05.565 "large_bufsize": 135168, 00:36:05.565 "enable_numa": false 00:36:05.565 } 00:36:05.565 } 00:36:05.565 ] 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "subsystem": "sock", 00:36:05.565 "config": [ 00:36:05.565 { 00:36:05.565 "method": "sock_set_default_impl", 00:36:05.565 "params": { 00:36:05.565 "impl_name": "posix" 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "sock_impl_set_options", 00:36:05.565 "params": { 00:36:05.565 "impl_name": "ssl", 00:36:05.565 "recv_buf_size": 4096, 00:36:05.565 "send_buf_size": 4096, 00:36:05.565 "enable_recv_pipe": true, 00:36:05.565 "enable_quickack": false, 00:36:05.565 "enable_placement_id": 0, 00:36:05.565 "enable_zerocopy_send_server": true, 00:36:05.565 "enable_zerocopy_send_client": false, 00:36:05.565 "zerocopy_threshold": 0, 00:36:05.565 "tls_version": 0, 00:36:05.565 "enable_ktls": false 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "sock_impl_set_options", 00:36:05.565 "params": { 00:36:05.565 "impl_name": "posix", 00:36:05.565 "recv_buf_size": 2097152, 00:36:05.565 "send_buf_size": 2097152, 00:36:05.565 "enable_recv_pipe": true, 00:36:05.565 "enable_quickack": false, 00:36:05.565 "enable_placement_id": 0, 00:36:05.565 "enable_zerocopy_send_server": true, 00:36:05.565 
"enable_zerocopy_send_client": false, 00:36:05.565 "zerocopy_threshold": 0, 00:36:05.565 "tls_version": 0, 00:36:05.565 "enable_ktls": false 00:36:05.565 } 00:36:05.565 } 00:36:05.565 ] 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "subsystem": "vmd", 00:36:05.565 "config": [] 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "subsystem": "accel", 00:36:05.565 "config": [ 00:36:05.565 { 00:36:05.565 "method": "accel_set_options", 00:36:05.565 "params": { 00:36:05.565 "small_cache_size": 128, 00:36:05.565 "large_cache_size": 16, 00:36:05.565 "task_count": 2048, 00:36:05.565 "sequence_count": 2048, 00:36:05.565 "buf_count": 2048 00:36:05.565 } 00:36:05.565 } 00:36:05.565 ] 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "subsystem": "bdev", 00:36:05.565 "config": [ 00:36:05.565 { 00:36:05.565 "method": "bdev_set_options", 00:36:05.565 "params": { 00:36:05.565 "bdev_io_pool_size": 65535, 00:36:05.565 "bdev_io_cache_size": 256, 00:36:05.565 "bdev_auto_examine": true, 00:36:05.565 "iobuf_small_cache_size": 128, 00:36:05.565 "iobuf_large_cache_size": 16 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "bdev_raid_set_options", 00:36:05.565 "params": { 00:36:05.565 "process_window_size_kb": 1024, 00:36:05.565 "process_max_bandwidth_mb_sec": 0 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "bdev_iscsi_set_options", 00:36:05.565 "params": { 00:36:05.565 "timeout_sec": 30 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "bdev_nvme_set_options", 00:36:05.565 "params": { 00:36:05.565 "action_on_timeout": "none", 00:36:05.565 "timeout_us": 0, 00:36:05.565 "timeout_admin_us": 0, 00:36:05.565 "keep_alive_timeout_ms": 10000, 00:36:05.565 "arbitration_burst": 0, 00:36:05.565 "low_priority_weight": 0, 00:36:05.565 "medium_priority_weight": 0, 00:36:05.565 "high_priority_weight": 0, 00:36:05.565 "nvme_adminq_poll_period_us": 10000, 00:36:05.565 "nvme_ioq_poll_period_us": 0, 00:36:05.565 "io_queue_requests": 512, 00:36:05.565 
"delay_cmd_submit": true, 00:36:05.565 "transport_retry_count": 4, 00:36:05.565 "bdev_retry_count": 3, 00:36:05.565 "transport_ack_timeout": 0, 00:36:05.565 "ctrlr_loss_timeout_sec": 0, 00:36:05.565 "reconnect_delay_sec": 0, 00:36:05.565 "fast_io_fail_timeout_sec": 0, 00:36:05.565 "disable_auto_failback": false, 00:36:05.565 "generate_uuids": false, 00:36:05.565 "transport_tos": 0, 00:36:05.565 "nvme_error_stat": false, 00:36:05.565 "rdma_srq_size": 0, 00:36:05.565 "io_path_stat": false, 00:36:05.565 "allow_accel_sequence": false, 00:36:05.565 "rdma_max_cq_size": 0, 00:36:05.565 "rdma_cm_event_timeout_ms": 0, 00:36:05.565 "dhchap_digests": [ 00:36:05.565 "sha256", 00:36:05.565 "sha384", 00:36:05.565 "sha512" 00:36:05.565 ], 00:36:05.565 "dhchap_dhgroups": [ 00:36:05.565 "null", 00:36:05.565 "ffdhe2048", 00:36:05.565 "ffdhe3072", 00:36:05.565 "ffdhe4096", 00:36:05.565 "ffdhe6144", 00:36:05.565 "ffdhe8192" 00:36:05.565 ] 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "bdev_nvme_attach_controller", 00:36:05.565 "params": { 00:36:05.565 "name": "nvme0", 00:36:05.565 "trtype": "TCP", 00:36:05.565 "adrfam": "IPv4", 00:36:05.565 "traddr": "127.0.0.1", 00:36:05.565 "trsvcid": "4420", 00:36:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.565 "prchk_reftag": false, 00:36:05.565 "prchk_guard": false, 00:36:05.565 "ctrlr_loss_timeout_sec": 0, 00:36:05.565 "reconnect_delay_sec": 0, 00:36:05.565 "fast_io_fail_timeout_sec": 0, 00:36:05.565 "psk": "key0", 00:36:05.565 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:05.565 "hdgst": false, 00:36:05.565 "ddgst": false, 00:36:05.565 "multipath": "multipath" 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "bdev_nvme_set_hotplug", 00:36:05.565 "params": { 00:36:05.565 "period_us": 100000, 00:36:05.565 "enable": false 00:36:05.565 } 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 "method": "bdev_wait_for_examine" 00:36:05.565 } 00:36:05.565 ] 00:36:05.565 }, 00:36:05.565 { 00:36:05.565 
"subsystem": "nbd", 00:36:05.565 "config": [] 00:36:05.565 } 00:36:05.565 ] 00:36:05.565 }' 00:36:05.565 13:35:03 keyring_file -- keyring/file.sh@115 -- # killprocess 3365396 00:36:05.565 13:35:03 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3365396 ']' 00:36:05.565 13:35:03 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3365396 00:36:05.565 13:35:03 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:05.566 13:35:03 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:05.566 13:35:03 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3365396 00:36:05.566 13:35:03 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:05.566 13:35:03 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:05.566 13:35:03 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3365396' 00:36:05.566 killing process with pid 3365396 00:36:05.566 13:35:03 keyring_file -- common/autotest_common.sh@973 -- # kill 3365396 00:36:05.566 Received shutdown signal, test time was about 1.000000 seconds 00:36:05.566 00:36:05.566 Latency(us) 00:36:05.566 [2024-11-25T12:35:03.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:05.566 [2024-11-25T12:35:03.225Z] =================================================================================================================== 00:36:05.566 [2024-11-25T12:35:03.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:05.566 13:35:03 keyring_file -- common/autotest_common.sh@978 -- # wait 3365396 00:36:05.824 13:35:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=3366933 00:36:05.824 13:35:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3366933 /var/tmp/bperf.sock 00:36:05.824 13:35:03 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3366933 ']' 00:36:05.824 13:35:03 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:05.824 13:35:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:05.824 13:35:03 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.824 13:35:03 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:05.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:05.824 13:35:03 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.824 13:35:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:05.824 13:35:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:05.824 "subsystems": [ 00:36:05.824 { 00:36:05.824 "subsystem": "keyring", 00:36:05.824 "config": [ 00:36:05.824 { 00:36:05.824 "method": "keyring_file_add_key", 00:36:05.824 "params": { 00:36:05.824 "name": "key0", 00:36:05.824 "path": "/tmp/tmp.FwjuwZBBV5" 00:36:05.824 } 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "keyring_file_add_key", 00:36:05.824 "params": { 00:36:05.824 "name": "key1", 00:36:05.824 "path": "/tmp/tmp.rniLrLH8Mj" 00:36:05.824 } 00:36:05.824 } 00:36:05.824 ] 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "subsystem": "iobuf", 00:36:05.824 "config": [ 00:36:05.824 { 00:36:05.824 "method": "iobuf_set_options", 00:36:05.824 "params": { 00:36:05.824 "small_pool_count": 8192, 00:36:05.824 "large_pool_count": 1024, 00:36:05.824 "small_bufsize": 8192, 00:36:05.824 "large_bufsize": 135168, 00:36:05.824 "enable_numa": false 00:36:05.824 } 00:36:05.824 } 00:36:05.824 ] 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "subsystem": "sock", 00:36:05.824 "config": [ 00:36:05.824 { 00:36:05.824 "method": "sock_set_default_impl", 00:36:05.824 "params": { 00:36:05.824 "impl_name": "posix" 00:36:05.824 } 00:36:05.824 }, 
00:36:05.824 { 00:36:05.824 "method": "sock_impl_set_options", 00:36:05.824 "params": { 00:36:05.824 "impl_name": "ssl", 00:36:05.824 "recv_buf_size": 4096, 00:36:05.824 "send_buf_size": 4096, 00:36:05.824 "enable_recv_pipe": true, 00:36:05.824 "enable_quickack": false, 00:36:05.824 "enable_placement_id": 0, 00:36:05.824 "enable_zerocopy_send_server": true, 00:36:05.824 "enable_zerocopy_send_client": false, 00:36:05.824 "zerocopy_threshold": 0, 00:36:05.824 "tls_version": 0, 00:36:05.824 "enable_ktls": false 00:36:05.824 } 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "sock_impl_set_options", 00:36:05.824 "params": { 00:36:05.824 "impl_name": "posix", 00:36:05.824 "recv_buf_size": 2097152, 00:36:05.824 "send_buf_size": 2097152, 00:36:05.824 "enable_recv_pipe": true, 00:36:05.824 "enable_quickack": false, 00:36:05.824 "enable_placement_id": 0, 00:36:05.824 "enable_zerocopy_send_server": true, 00:36:05.824 "enable_zerocopy_send_client": false, 00:36:05.824 "zerocopy_threshold": 0, 00:36:05.824 "tls_version": 0, 00:36:05.824 "enable_ktls": false 00:36:05.824 } 00:36:05.824 } 00:36:05.824 ] 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "subsystem": "vmd", 00:36:05.824 "config": [] 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "subsystem": "accel", 00:36:05.824 "config": [ 00:36:05.824 { 00:36:05.824 "method": "accel_set_options", 00:36:05.824 "params": { 00:36:05.824 "small_cache_size": 128, 00:36:05.824 "large_cache_size": 16, 00:36:05.824 "task_count": 2048, 00:36:05.824 "sequence_count": 2048, 00:36:05.824 "buf_count": 2048 00:36:05.824 } 00:36:05.824 } 00:36:05.824 ] 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "subsystem": "bdev", 00:36:05.824 "config": [ 00:36:05.824 { 00:36:05.824 "method": "bdev_set_options", 00:36:05.824 "params": { 00:36:05.824 "bdev_io_pool_size": 65535, 00:36:05.824 "bdev_io_cache_size": 256, 00:36:05.824 "bdev_auto_examine": true, 00:36:05.824 "iobuf_small_cache_size": 128, 00:36:05.824 "iobuf_large_cache_size": 16 00:36:05.824 } 
00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "bdev_raid_set_options", 00:36:05.824 "params": { 00:36:05.824 "process_window_size_kb": 1024, 00:36:05.824 "process_max_bandwidth_mb_sec": 0 00:36:05.824 } 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "bdev_iscsi_set_options", 00:36:05.824 "params": { 00:36:05.824 "timeout_sec": 30 00:36:05.824 } 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "bdev_nvme_set_options", 00:36:05.824 "params": { 00:36:05.824 "action_on_timeout": "none", 00:36:05.824 "timeout_us": 0, 00:36:05.824 "timeout_admin_us": 0, 00:36:05.824 "keep_alive_timeout_ms": 10000, 00:36:05.824 "arbitration_burst": 0, 00:36:05.824 "low_priority_weight": 0, 00:36:05.824 "medium_priority_weight": 0, 00:36:05.824 "high_priority_weight": 0, 00:36:05.824 "nvme_adminq_poll_period_us": 10000, 00:36:05.824 "nvme_ioq_poll_period_us": 0, 00:36:05.824 "io_queue_requests": 512, 00:36:05.824 "delay_cmd_submit": true, 00:36:05.824 "transport_retry_count": 4, 00:36:05.824 "bdev_retry_count": 3, 00:36:05.824 "transport_ack_timeout": 0, 00:36:05.824 "ctrlr_loss_timeout_sec": 0, 00:36:05.824 "reconnect_delay_sec": 0, 00:36:05.824 "fast_io_fail_timeout_sec": 0, 00:36:05.824 "disable_auto_failback": false, 00:36:05.824 "generate_uuids": false, 00:36:05.824 "transport_tos": 0, 00:36:05.824 "nvme_error_stat": false, 00:36:05.824 "rdma_srq_size": 0, 00:36:05.824 "io_path_stat": false, 00:36:05.824 "allow_accel_sequence": false, 00:36:05.824 "rdma_max_cq_size": 0, 00:36:05.824 "rdma_cm_event_timeout_ms": 0, 00:36:05.824 "dhchap_digests": [ 00:36:05.824 "sha256", 00:36:05.824 "sha384", 00:36:05.824 "sha512" 00:36:05.824 ], 00:36:05.824 "dhchap_dhgroups": [ 00:36:05.824 "null", 00:36:05.824 "ffdhe2048", 00:36:05.824 "ffdhe3072", 00:36:05.824 "ffdhe4096", 00:36:05.824 "ffdhe6144", 00:36:05.824 "ffdhe8192" 00:36:05.824 ] 00:36:05.824 } 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "bdev_nvme_attach_controller", 00:36:05.824 "params": { 00:36:05.824 
"name": "nvme0", 00:36:05.824 "trtype": "TCP", 00:36:05.824 "adrfam": "IPv4", 00:36:05.824 "traddr": "127.0.0.1", 00:36:05.824 "trsvcid": "4420", 00:36:05.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.824 "prchk_reftag": false, 00:36:05.824 "prchk_guard": false, 00:36:05.824 "ctrlr_loss_timeout_sec": 0, 00:36:05.824 "reconnect_delay_sec": 0, 00:36:05.824 "fast_io_fail_timeout_sec": 0, 00:36:05.824 "psk": "key0", 00:36:05.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:05.824 "hdgst": false, 00:36:05.824 "ddgst": false, 00:36:05.824 "multipath": "multipath" 00:36:05.824 } 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "bdev_nvme_set_hotplug", 00:36:05.824 "params": { 00:36:05.824 "period_us": 100000, 00:36:05.824 "enable": false 00:36:05.824 } 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "method": "bdev_wait_for_examine" 00:36:05.824 } 00:36:05.824 ] 00:36:05.824 }, 00:36:05.824 { 00:36:05.824 "subsystem": "nbd", 00:36:05.824 "config": [] 00:36:05.824 } 00:36:05.824 ] 00:36:05.824 }' 00:36:05.824 [2024-11-25 13:35:03.353800] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
00:36:05.825 [2024-11-25 13:35:03.353884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3366933 ] 00:36:05.825 [2024-11-25 13:35:03.418934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.825 [2024-11-25 13:35:03.476957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.082 [2024-11-25 13:35:03.657311] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:06.339 13:35:03 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.339 13:35:03 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:06.339 13:35:03 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:06.339 13:35:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.339 13:35:03 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:06.596 13:35:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:06.596 13:35:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:06.596 13:35:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:06.596 13:35:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.596 13:35:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.596 13:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.596 13:35:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.854 13:35:04 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:06.854 13:35:04 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:06.854 13:35:04 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:06.854 13:35:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.854 13:35:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.854 13:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.854 13:35:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:07.111 13:35:04 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:07.111 13:35:04 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:07.111 13:35:04 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:07.111 13:35:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:07.368 13:35:04 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:07.368 13:35:04 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:07.368 13:35:04 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FwjuwZBBV5 /tmp/tmp.rniLrLH8Mj 00:36:07.368 13:35:04 keyring_file -- keyring/file.sh@20 -- # killprocess 3366933 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3366933 ']' 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3366933 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3366933 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3366933' 00:36:07.368 killing process with pid 3366933 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@973 -- # kill 3366933 00:36:07.368 Received shutdown signal, test time was about 1.000000 seconds 00:36:07.368 00:36:07.368 Latency(us) 00:36:07.368 [2024-11-25T12:35:05.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.368 [2024-11-25T12:35:05.027Z] =================================================================================================================== 00:36:07.368 [2024-11-25T12:35:05.027Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:07.368 13:35:04 keyring_file -- common/autotest_common.sh@978 -- # wait 3366933 00:36:07.626 13:35:05 keyring_file -- keyring/file.sh@21 -- # killprocess 3365383 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3365383 ']' 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3365383 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3365383 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3365383' 00:36:07.626 killing process with pid 3365383 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@973 -- # kill 3365383 00:36:07.626 13:35:05 keyring_file -- common/autotest_common.sh@978 -- # wait 3365383 00:36:08.191 00:36:08.191 real 0m14.594s 00:36:08.191 user 0m37.104s 00:36:08.191 sys 0m3.191s 00:36:08.191 13:35:05 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:36:08.191 13:35:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:08.191 ************************************ 00:36:08.191 END TEST keyring_file 00:36:08.191 ************************************ 00:36:08.191 13:35:05 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:08.191 13:35:05 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:08.191 13:35:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:08.191 13:35:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:08.191 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:36:08.191 ************************************ 00:36:08.191 START TEST keyring_linux 00:36:08.191 ************************************ 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:08.191 Joined session keyring: 224483817 00:36:08.191 * Looking for test storage... 
00:36:08.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:08.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.191 --rc genhtml_branch_coverage=1 00:36:08.191 --rc genhtml_function_coverage=1 00:36:08.191 --rc genhtml_legend=1 00:36:08.191 --rc geninfo_all_blocks=1 00:36:08.191 --rc geninfo_unexecuted_blocks=1 00:36:08.191 00:36:08.191 ' 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:08.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.191 --rc genhtml_branch_coverage=1 00:36:08.191 --rc genhtml_function_coverage=1 00:36:08.191 --rc genhtml_legend=1 00:36:08.191 --rc geninfo_all_blocks=1 00:36:08.191 --rc geninfo_unexecuted_blocks=1 00:36:08.191 00:36:08.191 ' 
00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:08.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.191 --rc genhtml_branch_coverage=1 00:36:08.191 --rc genhtml_function_coverage=1 00:36:08.191 --rc genhtml_legend=1 00:36:08.191 --rc geninfo_all_blocks=1 00:36:08.191 --rc geninfo_unexecuted_blocks=1 00:36:08.191 00:36:08.191 ' 00:36:08.191 13:35:05 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:08.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.191 --rc genhtml_branch_coverage=1 00:36:08.191 --rc genhtml_function_coverage=1 00:36:08.191 --rc genhtml_legend=1 00:36:08.191 --rc geninfo_all_blocks=1 00:36:08.191 --rc geninfo_unexecuted_blocks=1 00:36:08.191 00:36:08.191 ' 00:36:08.191 13:35:05 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:08.191 13:35:05 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:08.191 13:35:05 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.191 13:35:05 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.191 13:35:05 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.191 13:35:05 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.191 13:35:05 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.191 13:35:05 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:08.192 13:35:05 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:08.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:08.192 13:35:05 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:08.192 13:35:05 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:08.192 13:35:05 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:08.192 13:35:05 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:08.192 13:35:05 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:08.192 13:35:05 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:08.192 13:35:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:08.192 13:35:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:08.449 13:35:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:08.449 /tmp/:spdk-test:key0 00:36:08.449 13:35:05 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:08.449 13:35:05 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:08.449 13:35:05 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:08.449 13:35:05 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:08.449 13:35:05 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:08.449 13:35:05 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:08.449 13:35:05 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:08.449 13:35:05 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:08.449 13:35:05 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:08.449 13:35:05 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:08.449 13:35:05 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:08.449 13:35:05 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:08.450 13:35:05 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:08.450 13:35:05 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:08.450 13:35:05 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:08.450 /tmp/:spdk-test:key1 00:36:08.450 13:35:05 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3367348 00:36:08.450 13:35:05 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:08.450 13:35:05 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3367348 00:36:08.450 13:35:05 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3367348 ']' 00:36:08.450 13:35:05 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.450 13:35:05 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.450 13:35:05 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.450 13:35:05 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.450 13:35:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:08.450 [2024-11-25 13:35:05.938265] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
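In the `prep_key` steps traced above, `format_interchange_psk` pipes the hex key through `python -` to produce the TLS PSK interchange string (`NVMeTLSkey-1:00:...:`) written to `/tmp/:spdk-test:key0`. A minimal sketch of that wrapping follows; `format_psk` is a hypothetical helper name, and the checksum choice (zlib CRC32 of the configured key, appended little-endian before base64) is an assumption about the interchange format, not a verbatim copy of SPDK's snippet.

```shell
# Sketch of the interchange-format wrapping done in format_interchange_psk.
# ASSUMPTIONS: digest field "00" (no hash); zlib CRC32 appended little-endian.
format_psk() {
    python3 - "$1" <<'PY'
import base64, struct, sys, zlib

key = sys.argv[1].encode()
# key bytes followed by their CRC32, base64-encoded between the delimiters
blob = key + struct.pack('<I', zlib.crc32(key))
print(f"NVMeTLSkey-1:00:{base64.b64encode(blob).decode()}:")
PY
}
```

The resulting string has the same shape as the key material later loaded with `keyctl add` in this log.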
00:36:08.450 [2024-11-25 13:35:05.938374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3367348 ] 00:36:08.450 [2024-11-25 13:35:06.003488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.450 [2024-11-25 13:35:06.061683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.707 13:35:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:08.707 13:35:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:08.707 13:35:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:08.708 [2024-11-25 13:35:06.308227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:08.708 null0 00:36:08.708 [2024-11-25 13:35:06.340301] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:08.708 [2024-11-25 13:35:06.340818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.708 13:35:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:08.708 578480697 00:36:08.708 13:35:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:08.708 700610475 00:36:08.708 13:35:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3367358 00:36:08.708 13:35:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:08.708 13:35:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3367358 /var/tmp/bperf.sock 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3367358 ']' 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:08.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:08.708 13:35:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:08.965 [2024-11-25 13:35:06.405192] Starting SPDK v25.01-pre git sha1 9b3991571 / DPDK 24.03.0 initialization... 
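The `check_keys` calls traced below query `keyring_get_keys` over the bperf socket and filter the JSON with jq (`jq length`, `.[] | select(.name == ...)`, `jq -r .sn`) to compare the reported serial against `keyctl search @s user :spdk-test:key0`. The sketch below reproduces just the JSON side of that check without jq or a live RPC server; the `[{"name": ..., "sn": ...}]` shape is inferred from the jq filters in the trace, and `get_key_sn` is a hypothetical helper.

```shell
# Extract the serial number (sn) of a named key from keyring_get_keys-style
# JSON, mirroring: jq '.[] | select(.name == "<name>")' | jq -r .sn
# ASSUMPTION: response is a JSON array of {"name": ..., "sn": ...} objects.
get_key_sn() {
    python3 - "$1" "$2" <<'PY'
import json, sys

name, blob = sys.argv[1], sys.argv[2]
for key in json.loads(blob):
    if key["name"] == name:
        print(key["sn"])
        break
PY
}
```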
00:36:08.965 [2024-11-25 13:35:06.405265] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3367358 ] 00:36:08.965 [2024-11-25 13:35:06.471152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.965 [2024-11-25 13:35:06.530230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.222 13:35:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.222 13:35:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:09.222 13:35:06 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:09.222 13:35:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:09.480 13:35:06 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:09.480 13:35:06 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:09.737 13:35:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:09.737 13:35:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:09.995 [2024-11-25 13:35:07.515167] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:09.995 nvme0n1 00:36:09.995 13:35:07 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:09.995 13:35:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:09.995 13:35:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:09.995 13:35:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:09.995 13:35:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:09.995 13:35:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.253 13:35:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:10.253 13:35:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:10.253 13:35:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:10.253 13:35:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:10.253 13:35:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.253 13:35:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.253 13:35:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:10.511 13:35:08 keyring_linux -- keyring/linux.sh@25 -- # sn=578480697 00:36:10.511 13:35:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:10.511 13:35:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:10.511 13:35:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 578480697 == \5\7\8\4\8\0\6\9\7 ]] 00:36:10.511 13:35:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 578480697 00:36:10.511 13:35:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:10.511 13:35:08 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:10.768 Running I/O for 1 seconds... 00:36:11.702 10802.00 IOPS, 42.20 MiB/s 00:36:11.702 Latency(us) 00:36:11.702 [2024-11-25T12:35:09.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.702 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:11.702 nvme0n1 : 1.01 10795.81 42.17 0.00 0.00 11778.05 6043.88 17476.27 00:36:11.702 [2024-11-25T12:35:09.361Z] =================================================================================================================== 00:36:11.702 [2024-11-25T12:35:09.361Z] Total : 10795.81 42.17 0.00 0.00 11778.05 6043.88 17476.27 00:36:11.702 { 00:36:11.702 "results": [ 00:36:11.702 { 00:36:11.702 "job": "nvme0n1", 00:36:11.702 "core_mask": "0x2", 00:36:11.702 "workload": "randread", 00:36:11.702 "status": "finished", 00:36:11.702 "queue_depth": 128, 00:36:11.702 "io_size": 4096, 00:36:11.702 "runtime": 1.01243, 00:36:11.702 "iops": 10795.808105251721, 00:36:11.702 "mibps": 42.171125411139535, 00:36:11.702 "io_failed": 0, 00:36:11.702 "io_timeout": 0, 00:36:11.702 "avg_latency_us": 11778.053482430281, 00:36:11.702 "min_latency_us": 6043.875555555555, 00:36:11.702 "max_latency_us": 17476.266666666666 00:36:11.702 } 00:36:11.702 ], 00:36:11.702 "core_count": 1 00:36:11.702 } 00:36:11.702 13:35:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:11.702 13:35:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:11.962 13:35:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:11.962 13:35:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:11.962 13:35:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:11.962 13:35:09 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:11.962 13:35:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:11.962 13:35:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.219 13:35:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:12.219 13:35:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:12.219 13:35:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:12.219 13:35:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:12.219 13:35:09 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:12.219 13:35:09 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:12.219 13:35:09 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:12.219 13:35:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:12.219 13:35:09 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:12.219 13:35:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:12.219 13:35:09 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:12.219 13:35:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:12.478 [2024-11-25 13:35:10.114474] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:12.478 [2024-11-25 13:35:10.114851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef8ce0 (107): Transport endpoint is not connected 00:36:12.478 [2024-11-25 13:35:10.115843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef8ce0 (9): Bad file descriptor 00:36:12.478 [2024-11-25 13:35:10.116842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:12.478 [2024-11-25 13:35:10.116861] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:12.478 [2024-11-25 13:35:10.116888] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:12.478 [2024-11-25 13:35:10.116902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:12.478 request: 00:36:12.478 { 00:36:12.478 "name": "nvme0", 00:36:12.478 "trtype": "tcp", 00:36:12.478 "traddr": "127.0.0.1", 00:36:12.478 "adrfam": "ipv4", 00:36:12.478 "trsvcid": "4420", 00:36:12.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.478 "prchk_reftag": false, 00:36:12.478 "prchk_guard": false, 00:36:12.478 "hdgst": false, 00:36:12.478 "ddgst": false, 00:36:12.478 "psk": ":spdk-test:key1", 00:36:12.478 "allow_unrecognized_csi": false, 00:36:12.478 "method": "bdev_nvme_attach_controller", 00:36:12.478 "req_id": 1 00:36:12.478 } 00:36:12.478 Got JSON-RPC error response 00:36:12.478 response: 00:36:12.478 { 00:36:12.478 "code": -5, 00:36:12.478 "message": "Input/output error" 00:36:12.478 } 00:36:12.478 13:35:10 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:12.478 13:35:10 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:12.478 13:35:10 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:12.478 13:35:10 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:12.478 13:35:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:12.478 13:35:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:12.478 13:35:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@33 -- # sn=578480697 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 578480697 00:36:12.735 1 links removed 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:12.735 
13:35:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@33 -- # sn=700610475 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 700610475 00:36:12.735 1 links removed 00:36:12.735 13:35:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3367358 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3367358 ']' 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3367358 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3367358 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3367358' 00:36:12.735 killing process with pid 3367358 00:36:12.735 13:35:10 keyring_linux -- common/autotest_common.sh@973 -- # kill 3367358 00:36:12.735 Received shutdown signal, test time was about 1.000000 seconds 00:36:12.736 00:36:12.736 Latency(us) 00:36:12.736 [2024-11-25T12:35:10.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.736 [2024-11-25T12:35:10.395Z] =================================================================================================================== 00:36:12.736 [2024-11-25T12:35:10.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:12.736 13:35:10 keyring_linux -- common/autotest_common.sh@978 -- # wait 3367358 
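The shutdown sequence above runs autotest_common.sh's `killprocess` for both pids: verify the pid is set, probe it, read its comm name via `ps --no-headers -o comm=`, refuse to kill a `sudo` wrapper, then signal and reap it. A simplified sketch of that pattern follows, under the assumption that procps `ps` is available; it sends SIGTERM rather than the harness's exact signal choices.

```shell
# Minimal sketch of the killprocess pattern from the trace above.
killprocess() {
    local pid=$1 comm
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    comm=$(ps --no-headers -o comm= "$pid" 2>/dev/null || echo unknown)
    [ "$comm" != "sudo" ] || return 1               # never SIGKILL a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap if it is our child
}
```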
00:36:12.736 13:35:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3367348 00:36:12.736 13:35:10 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3367348 ']' 00:36:12.736 13:35:10 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3367348 00:36:12.736 13:35:10 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:12.736 13:35:10 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.736 13:35:10 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3367348 00:36:12.992 13:35:10 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:12.992 13:35:10 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:12.992 13:35:10 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3367348' 00:36:12.992 killing process with pid 3367348 00:36:12.992 13:35:10 keyring_linux -- common/autotest_common.sh@973 -- # kill 3367348 00:36:12.992 13:35:10 keyring_linux -- common/autotest_common.sh@978 -- # wait 3367348 00:36:13.249 00:36:13.249 real 0m5.160s 00:36:13.249 user 0m10.315s 00:36:13.249 sys 0m1.622s 00:36:13.249 13:35:10 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:13.249 13:35:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:13.249 ************************************ 00:36:13.249 END TEST keyring_linux 00:36:13.249 ************************************ 00:36:13.249 13:35:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:13.249 13:35:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:13.249 13:35:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:13.249 13:35:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:13.249 13:35:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:13.249 13:35:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:13.249 13:35:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:13.249 13:35:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:13.249 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:36:13.249 13:35:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:13.249 13:35:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:13.249 13:35:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:13.249 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:36:15.147 INFO: APP EXITING 00:36:15.147 INFO: killing all VMs 00:36:15.147 INFO: killing vhost app 00:36:15.147 INFO: EXIT DONE 00:36:16.529 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:16.529 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:16.529 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:16.529 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:16.529 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:16.529 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:16.529 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:16.529 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:16.529 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:36:16.529 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:16.529 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:16.529 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:16.529 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:16.529 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:16.529 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:16.529 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:16.529 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:17.901 Cleaning 00:36:17.902 Removing: /var/run/dpdk/spdk0/config 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:17.902 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:17.902 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:17.902 Removing: /var/run/dpdk/spdk1/config 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:17.902 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:17.902 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:17.902 Removing: /var/run/dpdk/spdk2/config 00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:17.902 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:36:17.902 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:36:17.902 Removing: /var/run/dpdk/spdk2/hugepage_info
00:36:17.902 Removing: /var/run/dpdk/spdk3/config
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:36:17.902 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:36:17.902 Removing: /var/run/dpdk/spdk3/hugepage_info
00:36:17.902 Removing: /var/run/dpdk/spdk4/config
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:36:18.162 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:36:18.162 Removing: /var/run/dpdk/spdk4/hugepage_info
00:36:18.162 Removing: /dev/shm/bdev_svc_trace.1
00:36:18.162 Removing: /dev/shm/nvmf_trace.0
00:36:18.162 Removing: /dev/shm/spdk_tgt_trace.pid3046646
00:36:18.162 Removing: /var/run/dpdk/spdk0
00:36:18.162 Removing: /var/run/dpdk/spdk1
00:36:18.162 Removing: /var/run/dpdk/spdk2
00:36:18.162 Removing: /var/run/dpdk/spdk3
00:36:18.162 Removing: /var/run/dpdk/spdk4
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3044433
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3045358
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3046646
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3046982
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3047675
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3047815
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3048528
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3048659
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3048917
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3050118
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3051045
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3051364
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3051557
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3051870
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3052089
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3052244
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3052403
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3052595
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3052903
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3055393
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3055557
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3055723
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3055731
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3056157
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3056165
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3056595
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3056604
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3056893
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3056904
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3057068
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3057144
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3057577
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3057730
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3057945
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3060169
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3062809
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3069812
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3070229
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3072756
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3073028
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3075564
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3080019
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3082154
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3088540
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3093894
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3095096
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3095770
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3106148
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3108439
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3135915
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3139222
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3143058
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3147349
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3147355
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3148006
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3148659
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3149203
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3149604
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3149725
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3149922
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3150099
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3150121
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3150784
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3151884
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3152480
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3152877
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3152997
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3153146
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3154042
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3154854
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3160228
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3188292
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3191226
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3192405
00:36:18.162 Removing: /var/run/dpdk/spdk_pid3193728
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3193869
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3193978
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3194063
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3194593
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3195917
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3196658
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3197090
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3198713
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3199127
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3199692
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3202576
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3205988
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3205989
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3205990
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3208206
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3212940
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3215708
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3219475
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3220426
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3221406
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3222488
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3225258
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3227843
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3230209
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3234441
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3234444
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3237320
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3237489
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3237619
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3237885
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3237961
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3240748
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3241231
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3244406
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3246262
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3249717
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3253157
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3259650
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3264128
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3264131
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3276608
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3277251
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3278049
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3278460
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3279042
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3279452
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3279940
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3280385
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3282892
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3283042
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3286848
00:36:18.422 Removing: /var/run/dpdk/spdk_pid3287012
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3290374
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3292875
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3299923
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3300444
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3302832
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3303109
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3305744
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3309444
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3312107
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3318483
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3323685
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3324942
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3325611
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3335706
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3337954
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3339970
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3344929
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3345061
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3348522
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3349953
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3351352
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3352212
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3353624
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3354504
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3359902
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3360289
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3360684
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3362243
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3362537
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3362933
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3365383
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3365396
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3366933
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3367348
00:36:18.423 Removing: /var/run/dpdk/spdk_pid3367358
00:36:18.423 Clean
00:36:18.681 13:35:16 -- common/autotest_common.sh@1453 -- # return 0
00:36:18.681 13:35:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:18.681 13:35:16 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:18.681 13:35:16 -- common/autotest_common.sh@10 -- # set +x
00:36:18.681 13:35:16 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:18.681 13:35:16 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:18.681 13:35:16 -- common/autotest_common.sh@10 -- # set +x
00:36:18.681 13:35:16 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:18.681 13:35:16 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:18.681 13:35:16 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:18.681 13:35:16 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:18.681 13:35:16 -- spdk/autotest.sh@398 -- # hostname
00:36:18.681 13:35:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:18.938 geninfo: WARNING: invalid characters removed from testname!
00:36:51.024 13:35:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:55.220 13:35:52 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:57.760 13:35:55 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:01.051 13:35:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:03.588 13:36:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:06.879 13:36:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:10.169 13:36:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:10.169 13:36:07 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:10.169 13:36:07 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:37:10.169 13:36:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:10.169 13:36:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:10.169 13:36:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:37:10.169 + [[ -n 2974471 ]]
00:37:10.169 + sudo kill 2974471
00:37:10.177 [Pipeline] }
00:37:10.191 [Pipeline] // stage
00:37:10.196 [Pipeline] }
00:37:10.209 [Pipeline] // timeout
00:37:10.214 [Pipeline] }
00:37:10.228 [Pipeline] // catchError
00:37:10.233 [Pipeline] }
00:37:10.247 [Pipeline] // wrap
00:37:10.253 [Pipeline] }
00:37:10.265 [Pipeline] // catchError
00:37:10.274 [Pipeline] stage
00:37:10.276 [Pipeline] { (Epilogue)
00:37:10.288 [Pipeline] catchError
00:37:10.289 [Pipeline] {
00:37:10.301 [Pipeline] echo
00:37:10.303 Cleanup processes
00:37:10.309 [Pipeline] sh
00:37:10.593 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:10.593 3378451 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:10.607 [Pipeline] sh
00:37:10.890 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:37:10.890 ++ grep -v 'sudo pgrep'
00:37:10.890 ++ awk '{print $1}'
00:37:10.890 + sudo kill -9
00:37:10.890 + true
00:37:10.902 [Pipeline] sh
00:37:11.245 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:21.241 [Pipeline] sh
00:37:21.525 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:21.525 Artifacts sizes are good
00:37:21.539 [Pipeline] archiveArtifacts
00:37:21.546 Archiving artifacts
00:37:21.715 [Pipeline] sh
00:37:21.990 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:22.002 [Pipeline] cleanWs
00:37:22.011 [WS-CLEANUP] Deleting project workspace...
00:37:22.011 [WS-CLEANUP] Deferred wipeout is used...
00:37:22.018 [WS-CLEANUP] done
00:37:22.020 [Pipeline] }
00:37:22.035 [Pipeline] // catchError
00:37:22.045 [Pipeline] sh
00:37:22.325 + logger -p user.info -t JENKINS-CI
00:37:22.332 [Pipeline] }
00:37:22.348 [Pipeline] // stage
00:37:22.353 [Pipeline] }
00:37:22.367 [Pipeline] // node
00:37:22.372 [Pipeline] End of Pipeline
00:37:22.409 Finished: SUCCESS